
ARYA INSTITUTE OF ENGINEERING & TECHNOLOGY

Department of Computer Science Engineering & IT

MODEL PAPER & SOLUTION


(B. Tech. V Semester 2020-2021)

5CS4-03 OPERATING SYSTEM

Max. Marks: 150 (IA:30, ETE:120) End Term Exam: 3 Hours

Unit 1:

Short Answers: (2 Marks Each)

Q. 1 What are the different services provided by the operating system?

Q. 2 What is Process? Define the different states of process.

Q. 3 Differentiate between multitasking and multiprogramming.

Q. 4 Define binary semaphore and mutex.

Q.5 What is a critical section? Write the criteria for a solution to the critical section problem.

Q.6 Describe multithreading and its various models.

Descriptive Answers: (5 to 20 Marks)

Q. 1 What is Critical section? Explain the Peterson’s solution of critical section problem with example.

Q. 2 How are the lock variable method and the strict alternation method used to solve the critical section
problem? Explain with examples. Also explain their disadvantages.

Q. 3 What is a semaphore? How is the critical section problem solved with the help of semaphores? Explain
with an example.
Q. 4 Consider the following processes with arrival time and burst time. Calculate average turnaround
time, average waiting time and average response time using round robin with time quantum 3?

Q.5 Critically evaluate the methods of message passing as a means of inter process communication.

Q.6 Explain the structure and operations of operating system.

Q.7 What are CPU scheduling algorithms? Explain the following scheduling algorithms:

1. FCFS 2. SJF 3. Priority scheduling 4. Round robin.

Unit 2:

Short Answers: (2 Marks Each)

Q. 1 What is virtual memory?

Q. 2 Explain the following methods of contiguous memory allocation:

1. First Fit 2. Best Fit 3. Worst Fit

Q. 3 What is demand paging?

Q. 4 What is Thrashing?

Q.5 Compare and contrast between internal fragmentation and external fragmentation.

Q.6 What are page hits and page faults in the virtual memory concept?

Descriptive Answers: (5 to 20 Marks)

Q. 1 Given the page reference string: 1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6. Compare the number of page faults
for the LRU, FIFO and Optimal page replacement algorithms. (Frame size: 3)

Q. 2 Explain the term segmentation with proper example.

Q. 3 Explain the following page replacement algorithm:

1. FIFO 2. Optimal page replacement 3. LRU

Q. 4 Explain implementation of paging techniques.

Q.5 Explain the translation lookaside buffer (TLB) with an example.

Q.6 What is Memory allocation? Differentiate between contiguous and noncontiguous memory allocation
method.

Unit 3:

Short Answers: (2 Marks Each)


Q. 1 What is deadlock? Describe the resource allocation graph.

Q. 2 What is a device driver? Explain the device characteristics.

Q. 3 What is safe state?

Q. 4 Explain FCFS disk arm scheduling algorithm.

Q.5 What are four necessary conditions for deadlock?

Q.6 Define the elevator algorithm.

Descriptive Answers: (5 to 20 Marks)

Q. 1 Write the condition for deadlock. Explain the protocol used to break the circular wait condition.

Q. 2 Explain how deadlock is prevented by operating system.

Q. 3 Using the Banker's Algorithm, consider a system with five processes P0 through P4 and three
resource types A, B and C. Resource type A has 10 instances, resource type B has 5 instances and resource
type C has 7 instances.

a) What is the content of the Need matrix?

b) Is the system in a safe state?

Q. 4 Discuss Banker’s algorithm and safety algorithm?


Q.5 How to handle deadlock by operating system? Explain deadlock avoidance methods with example.

Q.6 Explain various disk scheduling algorithms in brief

Unit 4:

Short Answers: (2 Marks Each)

Q. 1 What is a File? Explain different types of files.

Q. 2 What is file system?

Q. 3 What are attributes of File? Explain.

Q. 4 Define different types of file operations.

Q.5 Which type of device directory structure is used by Linux? Explain.

Descriptive Answers: (5 to 20 Marks)


Q. 1 What are the various access methods for file system?

Q. 2 Explain File structure.

Q. 3 Write short notes on file protections.

Q. 4 Differentiate between ordinary files and device files.

Q.5 Describe the file authentication process in the Linux operating system.

Unit 5:

Short Answers: (2 Marks Each)

Q. 1 Describe the components of the Linux operating system.

Q. 2 What do you mean by Real Time? Discuss hard real time and soft real time.

Q. 3 What is RTOS?

Q. 4 Discuss different types of mobile OS.

Q.5 Discuss similarities and differences between UNIX and Linux.

Descriptive Answers: (5 to 20 Marks)

Q. 1 How are processes managed in the Linux system? Explain in detail.

Q. 2 Write short notes on memory management in Linux system.

Q. 3 What are the advantages and disadvantages of writing an operating system in a high level language, such
as C?

Q. 4 Would you classify Linux threads as user level threads or as kernel level threads? Support your answer
with appropriate arguments.

Q.5 Explain the architecture and application of Mobile Operating system.


Solution
Unit 1:

Short Answers: (2 Marks Each)

Q. 1 What are the different services provided by the operating system?

Ans:

An Operating System provides services to both the users and to the programs.

It provides programs an environment to execute.

It provides users the services to execute the programs in a convenient manner.

Following are a few common services provided by an operating system −

 Program execution
 I/O operations
 File System manipulation
 Communication
 Error Detection
 Resource Allocation
 Protection

Q. 2 What is Process? Define the different states of process.

Ans:

A process is basically a program in execution. The execution of a process must progress in a sequential
fashion.

A process is defined as an entity which represents the basic unit of work to be implemented in the system.
To put it in simple terms, we write our computer programs in a text file and when we execute this program,
it becomes a process which performs all the tasks mentioned in the program.

Process Life Cycle

When a process executes, it passes through different states. These stages may differ in different operating
systems, and the names of these states are also not standardized.

In general, a process can have one of the following five states at a time.

S.N. State & Description

1 Start

This is the initial state when a process is first started/created.

2 Ready

The process is waiting to be assigned to a processor. Ready processes are waiting to have the
processor allocated to them by the operating system so that they can run. A process may come
into this state after the Start state, or while running, if it is interrupted by the scheduler so the
CPU can be assigned to some other process.

3 Running

Once the process has been assigned to a processor by the OS scheduler, the process state is
set to running and the processor executes its instructions.

4 Waiting

The process moves into the waiting state if it needs to wait for a resource, such as waiting for user
input, or waiting for a file to become available.

5 Terminated or Exit

Once the process finishes its execution, or it is terminated by the operating system, it is
moved to the terminated state where it waits to be removed from main memory.

Q. 3 Differentiate between multitasking and multiprogramming.

What is Multiprogramming?

Multiprogramming is the ability to keep more than one program in main memory at a time using a single
CPU. The idea is to effectively utilize the processor by keeping multiple ready-to-run processes, with each
process possibly belonging to a different user. If the current process stalls because it has to wait for some
particular event, the operating system allocates the CPU to another process in the queue. The whole operation
is facilitated by multiprogramming operating systems, which maximize CPU utilization and so reduce the idle
time of the CPU. The idea is to keep the CPU busy for as long as possible.

What is Multitasking?

Multitasking means the apparently concurrent execution of multiple tasks on the same computer, with the
CPU switching rapidly between them. For example, in a multitasking operating system, you may work on a
word document with one program while listening to music at the same time with another program. It is
based on the concept of time sharing: multiple processes or tasks are switched at regular intervals of time,
so that users get the impression that they are running concurrently.
Q. 4 Define binary semaphore and mutex.

Mutex and Semaphore both provide synchronization services but they are not the same. Details about both
Mutex and Semaphore are given below:

Mutex

Mutex is a mutual exclusion object that synchronizes access to a resource. It is created with a unique name at
the start of a program. The Mutex is a locking mechanism that makes sure only one thread can acquire the
Mutex at a time and enter the critical section. This thread only releases the Mutex when it exits the critical
section.

This is shown with the help of the following example:

wait (mutex);

…..

Critical Section

…..

signal (mutex);

A Mutex is different from a semaphore as it is a locking mechanism while a semaphore is a signalling
mechanism. A binary semaphore can be used as a Mutex but a Mutex can never be used as a semaphore.

Semaphore

A semaphore is a signalling mechanism and a thread that is waiting on a semaphore can be signaled by
another thread. This is different than a mutex as the mutex can be signaled only by the thread that called the
wait function.

A semaphore uses two atomic operations, wait and signal for process synchronization.

The wait operation decrements the value of its argument S if it is positive. If S is zero or negative, the
process performing the wait busy-waits until S becomes positive.

wait(S) {
    while (S <= 0)
        ;        // busy wait
    S--;
}

The signal operation increments the value of its argument S.

signal(S) {
    S++;
}

There are mainly two types of semaphores i.e. counting semaphores and binary semaphores.

Counting Semaphores are integer value semaphores and have an unrestricted value domain. These
semaphores are used to coordinate the resource access, where the semaphore count is the number of available
resources.

Binary semaphores are like counting semaphores but their value is restricted to 0 and 1. The wait
operation only succeeds when the semaphore is 1, and the signal operation succeeds when the semaphore is 0.
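The binary-semaphore-as-mutex idea can be sketched in a runnable form (our illustration; the paper gives only pseudocode). Here Python's `threading.Semaphore(1)` plays the role of the binary semaphore, with `acquire`/`release` standing in for wait/signal:

```python
import threading

# A binary semaphore (initial value 1) used as a mutex: wait() maps to
# acquire() and signal() maps to release(). This is a sketch of the usage,
# not of how an OS implements semaphores internally.
sem = threading.Semaphore(1)
counter = 0

def worker(n):
    global counter
    for _ in range(n):
        sem.acquire()          # wait(S): blocks while S == 0, then S -> 0
        counter += 1           # critical section
        sem.release()          # signal(S): S -> 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                 # 20000: no increments were lost
```

Without the semaphore the two `counter += 1` updates could interleave and lose increments; with it, the critical section is mutually exclusive.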

Q.5 What is a critical section? Write the criteria for a solution to the critical section problem.

When more than one process accesses the same code segment, that segment is known as the critical section.
The critical section contains shared variables or resources which need to be synchronized to maintain
consistency of the data.

There are three requirements of a critical section solution that should be satisfied:

(i) Mutual Exclusion


(ii) Progress, and

(iii) Bounded Waiting

Mutual Exclusion

In simple terms, no two processes may be simultaneously inside the same critical section. When a process is
executing in its critical section, no other processes can be executing in their critical sections. Mutual exclusion
avoids race conditions. This is the most basic requirement for the solution.

Progress

No process running outside its critical region may block other processes. Suppose that no process is in the
critical section, and one or more processes want to enter the critical section; then one of them must be
able to enter the critical section. A process executing outside of its critical section cannot prevent other
processes from entering their critical sections.

Only those processes waiting for the critical section may take part in the arbitration, and the decision must
be made in a finite amount of time.

Bounded Waiting

No process should have to wait forever to enter a critical section. Bounded waiting ensures that there is no
starvation, which means a process cannot wait an indefinite amount of time. There must exist a bound on the
number of times that other processes are allowed to enter their critical sections before the request is granted.
Q.6 Describe multithreading and its various models.

Multithreading allows the execution of multiple parts of a program at the same time. These parts are known
as threads and are lightweight processes available within the process. Therefore, multithreading leads to
maximum utilization of the CPU by multitasking.

The main models for multithreading are one to one model, many to one model and many to many model.
Details about these are given as follows:

One to One Model

The one to one model maps each of the user threads to a kernel thread. This means that many threads can run
in parallel on multiprocessors and other threads can run when one thread makes a blocking system call.

A disadvantage of the one to one model is that the creation of a user thread requires a corresponding kernel
thread. Since a lot of kernel threads burden the system, there is restriction on the number of threads in the
system.

A diagram that demonstrates the one to one model is given as follows:

Many to One Model


The many to one model maps many of the user threads to a single kernel thread. This model is quite efficient
as thread management is handled in user space.

A disadvantage of the many to one model is that a thread blocking system call blocks the entire process.
Also, multiple threads cannot run in parallel as only one thread can access the kernel at a time.

A diagram that demonstrates the many to one model is given as follows:

Many to Many Model

The many to many model maps many of the user threads to an equal or smaller number of kernel threads. The
number of kernel threads depends on the application or machine.

The many to many model does not have the disadvantages of the one to one model or the many to one model. There
can be as many user threads as required and their corresponding kernel threads can run in parallel on a
multiprocessor.

A diagram that demonstrates the many to many model is given as follows:

Descriptive Answers: (5 to 20 Marks)

Q. 1 What is Critical section? Explain the Peterson’s solution of critical section problem with example.

The critical section is a code segment where the shared variables can be accessed. An atomic action is
required in a critical section i.e. only one process can execute in its critical section at a time. All the other
processes have to wait to execute in their critical sections.
A diagram that demonstrates the critical section is as follows:

In the above diagram, the entry section handles the entry into the critical section. It acquires the resources
needed for execution by the process. The exit section handles the exit from the critical section. It releases
the resources and also informs the other processes that the critical section is free.

Solution to the Critical Section Problem

The critical section problem needs a solution to synchronize the different processes. The solution to the
critical section problem must satisfy the following conditions:

1. Mutual Exclusion

Mutual exclusion implies that only one process can be inside the critical section at any time. If any
other processes require the critical section, they must wait until it is free.

2. Progress
Progress means that if a process is not using the critical section, then it should not stop any other
process from accessing it. In other words, any process can enter a critical section if it is free.

3. Bounded Waiting

Bounded waiting means that each process must have a limited waiting time. It should not wait
endlessly to access the critical section.

Peterson's Solution

 Peterson's Solution is a classic software-based solution to the critical section problem. It is
unfortunately not guaranteed to work on modern hardware, due to vagaries of load and store
operations, but it illustrates a number of important concepts.
 Peterson's solution is based on two processes, P0 and P1, which alternate between their critical
sections and remainder sections. For convenience of discussion, "this" process is Pi, and the
"other" process is Pj. ( I.e. j = 1 - i )
 Peterson's solution requires two shared data items:
o int turn - Indicates whose turn it is to enter into the critical section. If turn = = i, then
process i is allowed into their critical section.
o boolean flag[ 2 ] - Indicates when a process wants to enter into their critical section. When
process i wants to enter their critical section, it sets flag[ i ] to true.
 In the following diagram, the entry and exit sections are enclosed in boxes.
o In the entry section, process i first raises a flag indicating a desire to enter the critical
section.
o Then turn is set to j to allow the other process to enter their critical section if process j so
desires.
o The while loop is a busy loop ( notice the semicolon at the end ), which makes process i
wait as long as process j has the turn and wants to enter the critical section.
o Process i lowers the flag[ i ] in the exit section, allowing process j to continue if it has been
waiting.

Figure - The structure of process Pi in Peterson's solution.

 To prove that the solution is correct, we must examine the three conditions listed above:
1. Mutual exclusion - If one process is executing their critical section when the other wishes
to do so, the second process will become blocked by the flag of the first process. If both
processes attempt to enter at the same time, the last process to execute "turn = j" will be
blocked.
2. Progress - Each process can only be blocked at the while if the other process wants to use
the critical section ( flag[ j ] = = true ), AND it is the other process's turn to use the critical
section ( turn = = j ). If both of those conditions are true, then the other process ( j ) will be
allowed to enter the critical section, and upon exiting the critical section, will set flag[ j ] to
false, releasing process i. The shared variable turn assures that only one process at a time
can be blocked, and the flag variable allows one process to release the other when exiting
their critical section.
3. Bounded Waiting - As each process enters their entry section, they set the turn variable to
be the other process's turn. Since no process ever sets it back to their own turn, this
ensures that each process will have to let the other process go first at most one time before
it becomes their turn again.
 Note that the instruction "turn = j" is atomic, that is it is a single machine instruction which cannot
be interrupted.
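The structure of process Pi described above can be sketched as a runnable program (our illustration; the thread count and iteration count N are arbitrary). CPython's interpreter lock supplies the sequentially consistent loads and stores the algorithm assumes; on real hardware the caveat above about load and store vagaries applies:

```python
import threading
import time

# Peterson's algorithm for two threads, following the entry/exit structure
# described above.
flag = [False, False]   # flag[i]: process i wants to enter its critical section
turn = 0                # whose turn it is to yield
counter = 0
N = 2000

def process(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        flag[i] = True               # entry section: raise my flag
        turn = j                     # give the other process the turn
        while flag[j] and turn == j:
            time.sleep(0)            # busy wait (yield so the peer can run)
        counter += 1                 # critical section
        flag[i] = False              # exit section: lower my flag

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                       # 2 * N = 4000 if mutual exclusion held
```

If mutual exclusion were violated, concurrent `counter += 1` updates could be lost and the final count would fall short of 4000.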

Q. 2 How are the lock variable method and the strict alternation method used to solve the critical section
problem? Explain with examples. Also explain their disadvantages.

Mutex Locks

 The hardware solutions presented above are often difficult for ordinary programmers to access,
particularly on multi-processor machines, and particularly because they are often platform-
dependent.
 Therefore most systems offer a software API equivalent called mutex locks or simply mutexes. (
For mutual exclusion )
 The terminology when using mutexes is to acquire a lock prior to entering a critical section, and
to release it when exiting, as shown in Figure 5.8:
Figure - Solution to the critical-section problem using mutex locks

 Just as with hardware locks, the acquire step will block the process if the lock is in use by another
process, and both the acquire and release operations are atomic.
 Acquire and release can be implemented as shown here, based on a boolean variable "available":
 One problem with the implementation shown here, ( and in the hardware solutions presented
earlier ), is the busy loop used to block processes in the acquire phase. These types of locks are
referred to as spinlocks, because the CPU just sits and spins while blocking the process.
 Spinlocks are wasteful of CPU cycles, and are a really bad idea on single-CPU single-threaded
machines, because the spinlock blocks the entire computer, and doesn't allow any other process to
release the lock. ( Until the scheduler kicks the spinning process off of the CPU. )
 On the other hand, spinlocks do not incur the overhead of a context switch, so they are effectively
used on multi-threaded machines when it is expected that the lock will be released after a short
time.
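The acquire/release pair "based on a boolean variable available" that the missing figure would have shown can be sketched roughly as follows (our assumption). Python has no user-level atomic test-and-set, so this sketch borrows `Lock.acquire(blocking=False)` as the atomic test-and-set step; the busy-wait structure is the point being illustrated:

```python
import threading

class SpinLock:
    """Sketch of the acquire/release mutex from the text. The atomic
    test-and-set that real spinlocks need is simulated here with a
    non-blocking Lock.acquire(); the spinning structure is the point."""
    def __init__(self):
        self._flag = threading.Lock()   # held == not available

    def acquire(self):
        # while (!available) ; available = false  -- done as one atomic step
        while not self._flag.acquire(blocking=False):
            pass                        # busy wait: the CPU just spins

    def release(self):
        self._flag.release()            # available = true

lock = SpinLock()
counter = 0

def worker():
    global counter
    for _ in range(5_000):
        lock.acquire()
        counter += 1                    # critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                          # 10000
```

The `pass` loop is exactly the wasteful spinning the bullet points describe: the waiting thread burns CPU until the scheduler lets the holder run and release the lock.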

Strict Alternation

Now let's try being polite and really take turns. None of this wanting stuff.

Initially turn = 1

Code for P1:

Loop forever {
    while (turn == 2) { }      // busy wait
    critical-section
    turn <-- 2
    non-critical-section
}

Code for P2:

Loop forever {
    while (turn == 1) { }      // busy wait
    critical-section
    turn <-- 1
    non-critical-section
}

This one forces alternation, so it is not general enough. Specifically, it does not satisfy the progress
condition, which requires that no process in its non-critical section can stop another process from entering
its critical section. With alternation, if one process is in its non-critical section (NCS) then the other can
enter the CS once but not again.

The first example violated rule 4 (the whole system blocked). The second example violated rule 1 (both in
the critical section). The third example violated rule 3 (one process in the NCS stopped another from
entering its CS).

Q. 3 What is a semaphore? How is the critical section problem solved with the help of semaphores? Explain
with an example.

Semaphores

 A more robust alternative to simple mutexes is to use semaphores, which are integer variables for
which only two ( atomic ) operations are defined, the wait and signal operations, as shown in the
following figure.
 Note that not only must the variable-changing steps ( S-- and S++ ) be indivisible, it is also
necessary that for the wait operation when the test proves false that there be no interruptions
before S gets decremented. It IS okay, however, for the busy loop to be interrupted when the test is
true, which prevents the system from hanging forever.

5.6.1 Semaphore Usage

 In practice, semaphores can take on one of two forms:


o Binary semaphores can take on one of two values, 0 or 1. They can be used to solve the
critical section problem as described above, and can be used as mutexes on systems that do
not provide a separate mutex mechanism. The use of mutexes for this purpose is shown in
Figure 6.9 ( from the 8th edition ) below.
Mutual-exclusion implementation with semaphores. ( From 8th edition. )

o Counting semaphores can take on any integer value, and are usually used to count the
number remaining of some limited resource. The counter is initialized to the number of
such resources available in the system, and whenever the counting semaphore is greater
than zero, then a process can enter a critical section and use one of the resources. When the
counter gets to zero ( or negative in some implementations ), then the process blocks until
another process frees up a resource and increments the counting semaphore with a signal
call. ( The binary semaphore can be seen as just a special case where the number of
resources initially available is just one. )
o Semaphores can also be used to synchronize certain operations between processes. For
example, suppose it is important that process P1 execute statement S1 before process P2
executes statement S2.
 First we create a semaphore named synch that is shared by the two processes, and
initialize it to zero.
 Then in process P1 we insert the code:

S1;
signal( synch );

 and in process P2 we insert the code:


wait( synch );
S2;

 Because synch was initialized to 0, process P2 will block on the wait until after P1
executes the call to signal.
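The S1-before-S2 example can be run directly; this sketch (ours) uses `threading.Semaphore(0)` for synch:

```python
import threading

# A semaphore initialized to 0 forces S1 (in P1) to run before S2 (in P2),
# exactly as in the synch example above.
synch = threading.Semaphore(0)
order = []

def p1():
    order.append("S1")      # S1
    synch.release()         # signal( synch )

def p2():
    synch.acquire()         # wait( synch ): blocks until P1 signals
    order.append("S2")      # S2

t2 = threading.Thread(target=p2)
t1 = threading.Thread(target=p1)
t2.start()                  # start P2 first to show it really waits
t1.start()
t1.join(); t2.join()
print(order)                # ['S1', 'S2'] in every run
```

Even though P2 is started first, it blocks on the wait until P1 executes the signal, so the order is deterministic.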

5.6.2 Semaphore Implementation

 The big problem with semaphores as described above is the busy loop in the wait call, which
consumes CPU cycles without doing any useful work. This type of lock is known as a spinlock,
because the lock just sits there and spins while it waits. While this is generally a bad thing, it does
have the advantage of not invoking context switches, and so it is sometimes used in multi-
processing systems when the wait time is expected to be short - One thread spins on one processor
while another completes their critical section on another processor.
 An alternative approach is to block a process when it is forced to wait for an available semaphore,
and swap it out of the CPU. In this implementation each semaphore needs to maintain a list of
processes that are blocked waiting for it, so that one of the processes can be woken up and
swapped back in when the semaphore becomes available. ( Whether it gets swapped back into the
CPU immediately or whether it needs to hang out in the ready queue for a while is a scheduling
problem. )
 The new definition of a semaphore and the corresponding wait and signal operations are shown as
follows:
 Note that in this implementation the value of the semaphore can actually become negative, in
which case its magnitude is the number of processes waiting for that semaphore. This is a result of
decrementing the counter before checking its value.
 Key to the success of semaphores is that the wait and signal operations be atomic, that is no other
process can execute a wait or signal on the same semaphore at the same time. ( Other processes
could be allowed to do other things, including working with other semaphores, they just can't have
access to this semaphore. ) On single processors this can be implemented by disabling interrupts
during the execution of wait and signal; Multiprocessor systems have to use more complex
methods, including the use of spinlocking.
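The blocking semaphore whose definition the text promises "as follows" is not reproduced in the paper; a rough sketch under our own assumptions (threads stand in for processes, and an `Event` stands in for block()/wakeup()) might look like this:

```python
import threading
from collections import deque

class BlockingSemaphore:
    """Sketch of the blocking semaphore described above: the value may go
    negative, in which case its magnitude is the number of waiting threads.
    A real kernel would move the process between scheduler queues instead
    of using Events."""
    def __init__(self, value=0):
        self.value = value
        self.waiting = deque()            # list of blocked "processes"
        self._lock = threading.Lock()     # makes wait/signal atomic

    def wait(self):
        with self._lock:
            self.value -= 1               # decrement before testing the value
            ev = None
            if self.value < 0:
                ev = threading.Event()
                self.waiting.append(ev)   # add self to the waiting list
        if ev is not None:
            ev.wait()                     # block(): sleep, no busy loop

    def signal(self):
        with self._lock:
            self.value += 1
            if self.value <= 0:
                self.waiting.popleft().set()   # wakeup(P), FIFO order

sem = BlockingSemaphore(0)
log = []

def waiter():
    sem.wait()
    log.append("woke")

t = threading.Thread(target=waiter)
t.start()
sem.signal()      # wakes the waiter (or lets it pass if it hasn't waited yet)
t.join()
print(log, sem.value)
```

Using a FIFO queue for the waiting list gives every thread its turn, matching the starvation discussion that follows.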

5.6.3 Deadlocks and Starvation

 One important problem that can arise when using semaphores to block processes waiting for a
limited resource is the problem of deadlocks, which occur when multiple processes are blocked,
each waiting for a resource that can only be freed by one of the other ( blocked ) processes, as
illustrated in the following example. ( Deadlocks are covered more completely in chapter 7. )

 Another problem to consider is that of starvation, in which one or more processes gets blocked
forever, and never get a chance to take their turn in the critical section. For example, in the
semaphores above, we did not specify the algorithms for adding processes to the waiting queue in
the semaphore in the wait( ) call, or selecting one to be removed from the queue in the signal( )
call. If the method chosen is a FIFO queue, then every process will eventually get their turn, but if
a LIFO queue is implemented instead, then the first process to start waiting could starve.

5.6.4 Priority Inversion

 A challenging scheduling problem arises when a high-priority process gets blocked waiting for a
resource that is currently held by a low-priority process.
 If the low-priority process gets pre-empted by one or more medium-priority processes, then the
high-priority process is essentially made to wait for the medium priority processes to finish before
the low-priority process can release the needed resource, causing a priority inversion. If there are
enough medium-priority processes, then the high-priority process may be forced to wait for a very
long time.
 One solution is a priority-inheritance protocol, in which a low-priority process holding a resource
for which a high-priority process is waiting will temporarily inherit the high priority from the
waiting process. This prevents the medium-priority processes from preempting the low-priority
process until it releases the resource, avoiding the priority inversion problem.

Q. 4 Consider the following processes with arrival time and burst time. Calculate average turnaround
time, average waiting time and average response time using round robin with time quantum 3?

Q.5 Critically evaluate the methods of message passing as a means of inter process communication.

Cooperating processes need an interprocess communication (IPC) mechanism that will allow them to
exchange data and information.

Two models of IPC

1. Shared memory:

a region of memory that is shared by cooperating processes is established

processes can exchange information by reading and writing data to the shared region
2. Message passing:

communication takes place by means of messages exchanged between the cooperating processes.

Shared memory vs. Message Passing

Message passing is useful for exchanging smaller amounts of data and is easier to implement for
intercomputer communication. Shared memory is faster: message passing systems are typically
implemented using system calls and thus require kernel intervention, whereas in shared-memory systems,
system calls are required only to establish the shared-memory regions; all subsequent accesses are treated
as classical memory accesses and no assistance from the kernel is required.
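A minimal message-passing sketch (our illustration; `queue.Queue` stands in for the kernel's message buffer) shows the key property that the communicating parties share no variables, only a channel:

```python
import threading
import queue

# Message-passing IPC: producer and consumer communicate only through
# send/receive on a channel, never through shared variables.
channel = queue.Queue()

def producer():
    for msg in ["hello", "world", None]:   # None marks end of stream
        channel.put(msg)                   # send(msg) -- may block if full

received = []

def consumer():
    while True:
        msg = channel.get()                # receive() -- blocks if empty
        if msg is None:
            break
        received.append(msg)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(received)                            # ['hello', 'world']
```

Each `put`/`get` here models the system call a real message-passing IPC mechanism would make, which is exactly the per-message kernel cost the comparison above describes.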

Q.6 Explain the structure and operations of operating system.

An operating system is a construct that allows the user application programs to interact with the system
hardware. Since the operating system is such a complex structure, it should be created with utmost care so
it can be used and modified easily. An easy way to do this is to create the operating system in parts. Each
of these parts should be well defined with clear inputs, outputs and functions.

Simple Structure

There are many operating systems that have a rather simple structure. These started as small systems and
rapidly expanded far beyond their original scope. A common example of this is MS-DOS. It was designed
simply for a niche group of people. There was no indication that it would become so popular.

An image to illustrate the structure of MS-DOS is as follows:

It is better that operating systems have a modular structure, unlike MS-DOS. That would lead to greater
control over the computer system and its various applications. The modular structure would also allow the
programmers to hide information as required and implement internal routines as they see fit without
changing the outer specifications.
Layered Structure

One way to achieve modularity in the operating system is the layered approach. In this, the bottom layer is
the hardware and the topmost layer is the user interface.

An image demonstrating the layered approach is as follows:

As seen from the image, each upper layer is built on the bottom layer. All the layers hide some structures,
operations etc from their upper layers.

One problem with the layered structure is that each layer needs to be carefully defined. This is necessary
because the upper layers can only use the functionalities of the layers below them.

Q.7 What are CPU scheduling algorithms? Explain the following scheduling algorithms:

1. FCFS 2. SJF 3. Priority scheduling 4. Round robin.


1.FCFS:

First Come First Serve (FCFS)

 Jobs are executed on first come, first serve basis.


 It is a non-preemptive scheduling algorithm.
 Easy to understand and implement.
 Its implementation is based on FIFO queue.
 Poor in performance as average wait time is high.

Wait time of each process is as follows −

Process Wait Time : Service Time - Arrival Time

P0 0-0=0

P1 5-1=4

P2 8-2=6

P3 16 - 3 = 13
Average Wait Time: (0+4+6+13) / 4 = 5.75
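The wait times above can be reproduced with a short simulation. The process table for this FCFS example is not shown in the paper; arrival times 0-3 and burst times 5, 3, 8, 6 are assumed here because they yield exactly the listed wait times:

```python
# FCFS scheduling simulation. Assumed process table: (name, arrival, burst).
processes = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]

time_now = 0
waits = {}
for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
    time_now = max(time_now, arrival)   # CPU may sit idle until arrival
    waits[name] = time_now - arrival    # wait = service time - arrival time
    time_now += burst                   # run to completion (non-preemptive)

print(waits)                            # {'P0': 0, 'P1': 4, 'P2': 6, 'P3': 13}
print(sum(waits.values()) / len(waits)) # 5.75
```

Because FCFS is non-preemptive, each loop iteration simply runs one process to completion in arrival order.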

SJF:

Shortest Job Next (SJN)

 This is also known as shortest job first, or SJF


 This is a non-preemptive scheduling algorithm (the preemptive variant is shortest remaining time first).
 Best approach to minimize waiting time.
 Easy to implement in Batch systems where required CPU time is known in advance.
 Impossible to implement in interactive systems where required CPU time is not known.
 The processor should know in advance how much time the process will take.

Given: Table of processes, and their Arrival time, Execution time

Process Arrival Time Execution Time Service Time

P0 0 5 0

P1 1 3 5

P2 2 8 14

P3 3 6 8

Waiting time of each process is as follows −

Process Waiting Time


P0 0-0=0

P1 5-1=4

P2 14 - 2 = 12

P3 8-3=5

Average Wait Time: (0 + 4 + 12 + 5)/4 = 21 / 4 = 5.25

Priority Based Scheduling

 Priority scheduling is a non-preemptive algorithm and one of the most common scheduling
algorithms in batch systems.
 Each process is assigned a priority. Process with highest priority is to be executed first and so on.
 Processes with same priority are executed on first come first served basis.
 Priority can be decided based on memory requirements, time requirements or any other resource
requirement.

Given: Table of processes, and their Arrival time, Execution time, and priority. Here we consider 1 to
be the lowest priority.

Process Arrival Time Execution Time Priority Service Time

P0 0 5 1 0

P1 1 3 2 11

P2 2 8 1 14

P3 3 6 3 5
Waiting time of each process is as follows −

Process Waiting Time

P0 0-0=0

P1 11 - 1 = 10

P2 14 - 2 = 12

P3 5-3=2

Average Wait Time: (0 + 10 + 12 + 2)/4 = 24 / 4 = 6

Round Robin Scheduling

 Round Robin is a preemptive process scheduling algorithm.


 Each process is provided a fixed time to execute, called a time quantum (3 units in the example
below).
 Once a process has executed for the given time quantum, it is preempted and another process executes
for a given time period.
 Context switching is used to save the states of preempted processes.

Wait time of each process is as follows −

Process Wait Time : Service Time - Arrival Time


P0 (0 - 0) + (12 - 3) = 9

P1 (3 - 1) = 2

P2 (6 - 2) + (14 - 9) + (20 - 17) = 12

P3 (9 - 3) + (17 - 12) = 11

Average Wait Time: (9+2+12+11) / 4 = 8.5
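The waits above assume a quantum of 3. A queue-based sketch (assuming processes that arrive during a slice are enqueued before the preempted process) reproduces them:

```python
from collections import deque

def round_robin(arrival, burst, quantum):
    n = len(arrival)
    remaining = list(burst)
    completion = [0] * n
    added = [False] * n
    queue = deque()

    def admit(now):
        # enqueue newly arrived processes in arrival order
        for i in sorted(range(n), key=lambda j: arrival[j]):
            if not added[i] and arrival[i] <= now:
                added[i] = True
                queue.append(i)

    t = 0
    admit(t)
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        admit(t)                  # arrivals during the slice queue first
        if remaining[i] > 0:
            queue.append(i)       # preempted process goes to the back
        else:
            completion[i] = t
        if not queue and not all(added):
            t = min(arrival[j] for j in range(n) if not added[j])
            admit(t)              # CPU was idle until the next arrival

    # waiting = turnaround - burst = (completion - arrival) - burst
    return [completion[i] - arrival[i] - burst[i] for i in range(n)]

w = round_robin([0, 1, 2, 3], [5, 3, 8, 6], 3)
print(w, sum(w) / len(w))  # [9, 2, 12, 11] 8.5
```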

Unit 2:

Short Answers: (2 Marks Each)

Q. 1 What is virtual memory?

Virtual memory is a technique that lets large programs store themselves in the form of pages during their
execution, with only the required pages or portions of a process loaded into the main memory. The
technique is useful because it provides a large virtual address space for user programs even when the
physical memory is very small.

In real scenarios, most processes never need all their pages at once, for the following reasons:

 Error handling code is not needed unless that specific error occurs, some of which are quite
rare.
 Arrays are often over-sized for worst-case scenarios, and only a small fraction of the arrays are
actually used in practice.
 Certain features of certain programs are rarely used.
Benefits of having Virtual Memory

1. Large programs can be written, as the virtual space available is huge compared to physical
memory.
2. Less I/O is required, which leads to faster and easier swapping of processes.
3. More physical memory is available, since at any time programs occupy only a small portion of the
actual physical memory.

Q. 2 Explain following method of contiguous memory allocations:

1. First Fit 2. Best Fit 3. Worst Fit

First Fit

The first fit approach allocates the first free partition or hole large enough to accommodate the
process. The search finishes after finding the first suitable free partition.

Advantage

Fastest algorithm because it searches as little as possible.

Disadvantage

The remaining unused memory areas left after allocation become wasted if they are too small. Thus a
request for a larger memory requirement cannot be accomplished.

Best Fit

The best fit deals with allocating the smallest free partition which meets the requirement of the
requesting process. This algorithm first searches the entire list of free partitions and considers the
smallest hole that is adequate. It then tries to find a hole which is close to actual process size needed.
Advantage

Memory utilization is much better than first fit, as it allocates the smallest free partition that is adequate.

Disadvantage

It is slower and may even tend to fill up memory with tiny useless holes.

Worst fit

The worst fit approach locates the largest available free portion, so that the portion left over will be big
enough to be useful. It is the reverse of best fit.

Advantage

Reduces the rate of production of small gaps.

Disadvantage

If a process requiring larger memory arrives at a later stage then it cannot be accommodated as the
largest hole is already split and occupied.

Q. 3 What is demand paging?

Demand paging is a type of swapping done in virtual memory systems. In demand paging, data is not
copied from the disk to the RAM until it is needed or demanded by some program. Data will not be
copied when it is already available in memory. This is also called lazy evaluation, because only the
demanded pages of memory are swapped from the secondary storage (disk space) to the main memory.
In contrast, during pure swapping, all the memory for a process is swapped from secondary storage to
main memory during process startup.

Q. 4 What is Thrashing?

Thrashing is a condition or a situation when the system is spending a major portion of its time in
servicing the page faults, but the actual processing done is very negligible.

Q.5 Compare and contrast between internal fragmentation and external fragmentation.

Comparison Chart

BASIS FOR COMPARISON | INTERNAL FRAGMENTATION | EXTERNAL FRAGMENTATION

Basic | It occurs when fixed-sized memory blocks are allocated to the processes. | It occurs when variable-sized memory spaces are allocated to the processes dynamically.

Occurrence | When the memory assigned to the process is slightly larger than the memory requested by the process, this creates free space in the allocated block, causing internal fragmentation. | When a process is removed from the memory, it creates free space in the memory, causing external fragmentation.

Solution | The memory must be partitioned into variable-sized blocks and the best-fit block assigned to the process. | Compaction, paging and segmentation.

Q.6 What is page hit and page fault in virtual memory concepts?

If the required page is found in main memory when the CPU wants to access it, it is a Page Hit. If the
required page is not found, it is called a Page Fault. On a page fault, we have to look into secondary
memory, fetch the required page and load it into main memory.

Descriptive Answers: (5 to 20 Marks)

Q. 1 Given page reference string: 1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6. Compare the number of page
faults for LRU, FIFO and Optimal page replacement algorithms. (Frame size: 3)
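No worked answer is given above; the fault counts can be obtained by simulating the three policies. A minimal sketch (the function names are mine) for the given reference string with 3 frames:

```python
def fifo(refs, frames):
    mem, faults = [], 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)          # evict the oldest-loaded page
            mem.append(p)
    return faults

def lru(refs, frames):
    mem, faults = [], 0             # mem ordered least- to most-recently used
    for p in refs:
        if p in mem:
            mem.remove(p)
        else:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)          # evict the least recently used page
        mem.append(p)
    return faults

def optimal(refs, frames):
    mem, faults = [], 0
    for i, p in enumerate(refs):
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                future = refs[i + 1:]
                # evict the page whose next use lies farthest in the future
                victim = max(mem, key=lambda q: future.index(q)
                             if q in future else len(future) + 1)
                mem.remove(victim)
            mem.append(p)
    return faults

refs = [1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6]
print(fifo(refs, 3), lru(refs, 3), optimal(refs, 3))  # 16 15 11
```

With 3 frames this gives 16 page faults for FIFO, 15 for LRU and 11 for Optimal, illustrating why Optimal is the lower bound.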
Q. 2 Explain the term segmentation with proper example.

A process is divided into segments. The chunks that a program is divided into, which are not necessarily
all of the same size, are called segments. Segmentation gives the user's view of the process, which paging
does not give. Here the user's view is mapped to physical memory.
There are two types of segmentation:

1. Virtual memory segmentation –


Each process is divided into a number of segments, not all of which are resident at any one point
in time.
2. Simple segmentation –
Each process is divided into a number of segments, all of which are loaded into memory at run
time, though not necessarily contiguously.

There is no simple relationship between logical addresses and physical addresses in segmentation. A
table stores the information about all such segments and is called Segment Table.
Segment Table – It maps a two-dimensional logical address into a one-dimensional physical address.
Each of its table entries has:

 Base Address: It contains the starting physical address where the segments reside in memory.
 Limit: It specifies the length of the segment.

Translation of Two dimensional Logical Address to one dimensional Physical Address.


Address generated by the CPU is divided into:

 Segment number (s): Number of bits required to represent the segment.


 Segment offset (d): Number of bits required to represent the size of the segment.

Advantages of Segmentation –

 No Internal fragmentation.
 Segment Table consumes less space in comparison to Page table in paging.

Disadvantage of Segmentation –

 As processes are loaded and removed from the memory, the free memory space is broken into
little pieces, causing External fragmentation.
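The base/limit translation described above can be sketched as follows (the segment table values are hypothetical, chosen only for illustration):

```python
# segment number -> (base, limit); hypothetical values for illustration
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 400)}

def translate(s, d):
    base, limit = segment_table[s]
    if d >= limit:
        # offset beyond the segment length -> addressing-error trap
        raise ValueError("segment offset out of range")
    return base + d

print(translate(2, 53))  # physical address 4300 + 53 = 4353
```

The limit check is what makes segmentation enforce protection: any offset at or past the segment length traps instead of reaching another segment's memory.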
Q. 3 Explain the following page replacement algorithm:

1. FIFO (FCFS) 2. Optimal page replacement 3. LRU

First In First Out (FIFO) algorithm

 Oldest page in main memory is the one which will be selected for replacement.
 Easy to implement, keep a list, replace pages from the tail and add new pages at the head.

Optimal Page algorithm

 An optimal page-replacement algorithm has the lowest page-fault rate of all algorithms. An
optimal page-replacement algorithm exists, and has been called OPT or MIN.
 Replace the page that will not be used for the longest period of time; this requires knowing, at
replacement time, when each page will next be used.
Least Recently Used (LRU) algorithm

 Page which has not been used for the longest time in main memory is the one which will be
selected for replacement.
 Easy to implement, keep a list, replace pages by looking back into time.

Q. 4 Explain implementation of paging techniques.


Paging

A computer can address more memory than the amount physically installed on the system. This extra
memory is called virtual memory, and it is a section of a hard disk that is set up to emulate the
computer's RAM. The paging technique plays an important role in implementing virtual memory.

Paging is a memory management technique in which process address space is broken into blocks of the
same size called pages (size is power of 2, between 512 bytes and 8192 bytes). The size of the process
is measured in the number of pages.

Similarly, main memory is divided into small fixed-sized blocks of (physical) memory
called frames and the size of a frame is kept the same as that of a page to have optimum utilization of
the main memory and to avoid external fragmentation.
Address Translation

Page address is called logical address and represented by page number and the offset.

Logical Address = Page number + page offset

Frame address is called physical address and represented by a frame number and the offset.

Physical Address = Frame number + page offset

A data structure called page map table is used to keep track of the relation between a page of a
process to a frame in physical memory.

When the system allocates a frame to any page, it translates this logical address into a physical address
and creates an entry in the page table, to be used throughout the execution of the program.
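The translation from (page number, offset) to (frame number, offset) can be sketched like this, with a hypothetical 4-byte page size and page table:

```python
PAGE_SIZE = 4                          # hypothetical; real pages are 512-8192 bytes
page_table = {0: 5, 1: 6, 2: 1, 3: 2}  # page number -> frame number

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)  # split the logical address
    frame = page_table[page]                   # look up the frame
    return frame * PAGE_SIZE + offset          # rebuild the physical address

print(translate(13))  # page 3, offset 1 -> frame 2, offset 1 -> 9
```

Because the page size is a power of two, the split in `divmod` is just taking the high and low bits of the address in hardware.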

When a process is to be executed, its corresponding pages are loaded into any available memory
frames. Suppose you have a program of 8 KB but your memory can accommodate only 5 KB at a given
point in time; then the paging concept comes into the picture. When a computer runs out of RAM, the
operating system (OS) will move idle or unwanted pages of memory to secondary memory to free up
RAM for other processes, and bring them back when needed by the program.

This process continues during the whole execution of the program: the OS keeps removing idle
pages from the main memory, writing them onto the secondary memory, and bringing them back when
required by the program.

Advantages and Disadvantages of Paging

Here is a list of advantages and disadvantages of paging −

 Paging reduces external fragmentation, but still suffers from internal fragmentation.
 Paging is simple to implement and is regarded as an efficient memory management technique.
 Due to equal size of the pages and frames, swapping becomes very easy.
 Page table requires extra memory space, so may not be good for a system having small RAM.

Q.5Explain translation look aside buffer (TLB) with example?

In an operating system that uses paging for memory management, a page table is created for each
process, containing Page Table Entries (PTEs). A PTE holds information such as the frame number
(the address in main memory that we want to refer to) and some other useful bits (e.g., valid/invalid
bit, dirty bit, protection bit etc.). The page table entry tells where in the main memory the actual
page resides.

Now the question is where to place the page table, such that overall access time (or reference time) will
be less.

The problem initially was to access main memory content quickly, based on the address generated by
the CPU (i.e. the logical/virtual address). Initially, some people thought of using registers to store the
page table, as registers are high-speed memory, so the access time would be less.
The idea used here is: place the page table entries in registers; each request generated by the CPU (a
virtual address) is matched to the appropriate page number of the page table, which then tells where
in the main memory the corresponding page resides. Everything seems right here, but the problem is
that register capacity is small (in practice, registers can accommodate a maximum of 0.5K to 1K page
table entries) and the process size may be big, hence the required page table will also be big (let's say
the page table contains 1M entries), so the registers may not hold all the PTEs of the page table. So
this is not a practical approach.

To overcome this size issue, the entire page table was kept in main memory, but the problem here is
that two main memory references are required:

1. To find the frame number


2. To go to the address specified by frame number

To overcome this problem a high-speed cache is set up for page table entries called a Translation
Lookaside Buffer (TLB). Translation Lookaside Buffer (TLB) is nothing but a special cache used to
keep track of recently used transactions. TLB contains page table entries that have been most recently
used. Given a virtual address, the processor examines the TLB; if the page table entry is present (a TLB
hit), the frame number is retrieved and the real address is formed. If the page table entry is not found in
the TLB (a TLB miss), the page number is used to index the process page table. The page table entry
indicates whether the page is in main memory; if it is not, a page fault is issued. The TLB is then
updated to include the new page entry.
Steps in TLB hit:

1. CPU generates virtual address.


2. It is checked in TLB (present).
3. Corresponding frame number is retrieved, which now tells where in the main memory page lies.

Steps in Page miss:

1. CPU generates virtual address.


2. It is checked in TLB (not present).
3. Now the page number is matched to page table residing in main memory (assuming page table
contains all PTE).
4. Corresponding frame number is retrieved, which now tells where in the main memory page lies.
5. The TLB is updated with new PTE (if space is not there, one of the replacement technique comes
into picture i.e either FIFO, LRU or MFU etc).

Effective memory access time(EMAT) : TLB is used to reduce effective memory access time as it is a
high speed associative cache.
EMAT = h*(c+m) + (1-h)*(c+2m)
where, h = hit ratio of TLB
m = Memory access time
c = TLB access time
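A quick numeric check of the EMAT formula; the hit ratio and access times below are assumed values chosen for illustration, not taken from the question:

```python
def emat(h, c, m):
    # h = TLB hit ratio, c = TLB access time, m = memory access time
    # hit: one TLB lookup + one memory access; miss: TLB lookup + two accesses
    return h * (c + m) + (1 - h) * (c + 2 * m)

# e.g. 80% hit ratio, 10 ns TLB access, 100 ns memory access
print(emat(0.80, 10, 100))  # ~130 ns
```

With no TLB, every reference would cost 2m = 200 ns, so even an 80% hit ratio saves about a third of the access time.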

Q.6 What is Memory allocation? Differentiate between contiguous and noncontiguous memory
allocation method.

Memory allocation is a process by which computer programs and services are assigned with physical
or virtual memory space. Memory allocation is the process of reserving a partial or complete portion
of computer memory for the execution of programs and processes.

Comparison Chart

BASIS FOR COMPARISON | CONTIGUOUS MEMORY ALLOCATION | NONCONTIGUOUS MEMORY ALLOCATION

Basic | Allocates consecutive blocks of memory to a process. | Allocates separate blocks of memory to a process.

Overheads | Contiguous memory allocation does not have the overhead of address translation while executing a process. | Noncontiguous memory allocation has the overhead of address translation while executing a process.

Execution rate | A process executes faster in contiguous memory allocation. | A process executes comparatively slower in noncontiguous memory allocation.

Solution | The memory space must be divided into fixed-sized partitions and each partition allocated to a single process only. | Divide the process into several blocks and place them in different parts of memory according to the availability of memory space.

Table | The operating system maintains a table listing the available and occupied partitions in the memory space. | A table has to be maintained for each process, carrying the base addresses of each block the process has acquired in memory.

Unit 3:

Short Answers: (2 Marks Each)

Q. 1 What is dead lock? Describe resource allocation graph.

A deadlock is a situation in which two computer programs sharing the same resource effectively
prevent each other from accessing the resource, resulting in both programs ceasing to function.
The earliest computer operating systems ran only one program at a time.

A resource allocation graph (RAG) shows the state of the system in terms of processes and resources:
how many resources are available, how many are allocated, and what the request of each process is.
Everything can be represented in terms of the diagram. One advantage of having a diagram is that
sometimes it is possible to see a deadlock directly by using the RAG, which you might not be able to
spot by looking at a table. Tables are better if the system contains many processes and resources, and
the graph is better if the system contains fewer processes and resources.
We know that any graph contains vertices and edges, so a RAG also contains vertices and edges. In a
RAG the vertices are of two types –

1. Process vertex – Every process is represented as a process vertex. Generally, the process is
represented with a circle.
2. Resource vertex – Every resource is represented as a resource vertex. It is also of two types –

 Single-instance resource type – It is represented as a box with one dot inside; the number of
dots indicates how many instances of the resource type are present.
 Multi-instance resource type – It is also represented as a box, with many dots present inside
it.
Now coming to the edges of the RAG. There are two types of edges in a RAG –

1. Assign edge – If a resource is already assigned to a process, the edge is called an assign edge.
2. Request edge – It means the process wants some resource in the future to complete its
execution; that is called a request edge.
So, if a process is using a resource, an arrow is drawn from the resource node to the process node. If a
process is requesting a resource, an arrow is drawn from the process node to the resource node.

Example 1 (Single instances RAG) –


If there is a cycle in the Resource Allocation Graph and each resource in the cycle provides only one
instance, then the processes will be in deadlock. For example, if process P1 holds resource R1, process
P2 holds resource R2 and process P1 is waiting for R2 and process P2 is waiting for R1, then process
P1 and process P2 will be in deadlock.
Here’s another example, that shows Processes P1 and P2 acquiring resources R1 and R2 while process
P3 is waiting to acquire both resources. In this example, there is no deadlock because there is no
circular dependency.
So cycle in single-instance resource type is the sufficient condition for deadlock.

Q. 2 What is device driver? Explain the device characteristics.

More commonly known as a driver, a device driver or hardware driver is a group of files that
enable one or more hardware devices to communicate with the computer's operating system. Without
drivers, the computer would not be able to send and receive data correctly to hardware devices, such
as a printer.

If the appropriate driver is not installed, the device may not function properly, if at all. For Microsoft
Windows users, a driver conflict or an error can be seen in the Device Manager. If problems or
conflicts are encountered with a driver, the computer manufacturer or hardware manufacturer will
release a driver update to fix the problems.

Q. 3 What is safe state?

A state is safe if the system can allocate all resources requested by all processes (up to their stated
maximums) without entering a deadlock state.

Q. 4 Explain FCFS disk arm scheduling algorithm.

1. FCFS: FCFS is the simplest of all the Disk Scheduling Algorithms. In FCFS, the requests are
addressed in the order they arrive in the disk queue.

Advantages:

 Every request gets a fair chance


 No indefinite postponement

Disadvantages:

 Does not try to optimize seek time


 May not provide the best possible service

Q.5 What are four necessary conditions for deadlock?

Four Necessary and Sufficient Conditions for Deadlock

1. Mutual exclusion – the resources involved must be unshareable; otherwise, the processes would
not be prevented from using the resource when necessary.
2. Hold and wait (partial allocation) – processes hold the resources already allocated to them while
waiting for other resources.
3. No pre-emption – resources cannot be forcibly taken away from a process while it is using them.
4. Resource waiting or circular wait – a circular chain of processes exists, with each process holding
resources requested by the next process in the chain.

Q.6 Define elevator algorithm?

Also known as the SCAN disk arm scheduling algorithm. In the SCAN algorithm, the disk arm moves
in a particular direction and services the requests coming in its path; after reaching the end of the
disk, it reverses its direction and again services the requests arriving in its path. The algorithm works
like an elevator and hence is also known as the elevator algorithm. As a result, the requests in the
mid-range are serviced more, and those arriving behind the disk arm have to wait.

Advantages:

 High throughput
 Low variance of response time
 Average response time

Descriptive Answers: (5 to 20 Marks)

Q. 1 Write the condition for deadlock. Explain the protocol used to break the circular wait condition.

Deadlock Conditions

1. mutual exclusion
The resources involved must be unshareable; otherwise, the processes would not be prevented from
using the resource when necessary.
2. hold and wait or partial allocation
The processes must hold the resources they have already been allocated while waiting for other
(requested) resources. If the process had to release its resources when a new resource or resources
were requested, deadlock could not occur because the process would not prevent others from using
resources that it controlled.
3. no pre-emption
The processes must not have resources taken away while that resource is being used. Otherwise,
deadlock could not occur since the operating system could simply take enough resources from
running processes to enable any process to finish.
4. resource waiting or circular wait
A circular chain of processes must exist, with each process holding resources which are currently being
requested by the next process in the chain. The cycle theorem (which states that "a cycle in the
resource graph is necessary for deadlock to occur") indicates that deadlock can then occur.

Eliminate Circular Wait


Each resource is assigned a numerical number. A process can request resources only in increasing
order of numbering.
For example, if process P1 is allocated resource R5, then a later request by P1 for R4 or R3 (lower than
R5) will not be granted; only requests for resources numbered higher than R5 will be granted.
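The numbering protocol can be sketched with ordered lock acquisition; the resource names and helper functions below are hypothetical, chosen only to illustrate the idea:

```python
import threading

# resources are globally numbered R1 < R2 < R3; every process must
# acquire them in increasing order, so a circular wait can never form
locks = {name: threading.Lock() for name in ("R1", "R2", "R3")}

def acquire_in_order(names):
    ordered = sorted(names)        # sort the request by resource number
    for name in ordered:
        locks[name].acquire()
    return ordered

def release_all(names):
    for name in reversed(names):   # release in reverse acquisition order
        locks[name].release()

held = acquire_in_order(["R3", "R1"])  # always locks R1 before R3
print(held)                            # ['R1', 'R3']
release_all(held)
```

Since every holder of a high-numbered lock already holds only lower-numbered ones, no cycle of waits can close, which is exactly the circular wait condition being broken.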

Q. 2 Explain how deadlock is prevented by operating system.

deadlock has following characteristics.

1. Mutual Exclusion
2. Hold and Wait
3. No preemption
4. Circular wait

Deadlock Prevention

We can prevent Deadlock by eliminating any of the above four conditions.


Eliminate Mutual Exclusion
It is not possible to eliminate mutual exclusion, because some resources, such as the tape drive and
printer, are inherently non-shareable.

Eliminate Hold and wait

1. Allocate all required resources to the process before the start of its execution; this way the hold and
wait condition is eliminated, but it leads to low device utilization. For example, if a process requires a
printer at a later time and we have allocated the printer before the start of its execution, the printer will
remain blocked till the process has completed its execution.
2. The process makes a new request for resources only after releasing the current set of resources. This
solution may lead to starvation.

Eliminate No Preemption
Preempt resources from a process when those resources are required by other, higher-priority processes.

Eliminate Circular Wait


Each resource is assigned a numerical number. A process can request resources only in increasing
order of numbering.
For example, if process P1 is allocated resource R5, then a later request by P1 for R4 or R3 (lower than
R5) will not be granted; only requests for resources numbered higher than R5 will be granted.

Q. 3 By using the Banker’s Algorithm, consider a system with five processes P0 through P4 and three
resource types A, B and C. resource type A has 10 instances, resource type B has 5 instances and resource
type C has 7 instances.

a) What is the content of the Need matrix?


b) Is the system in a safe state?

Yes.

Q. 4 Discuss Banker’s algorithm and safety algorithm?

Deadlock Avoidance

Deadlock avoidance can be done with Banker’s Algorithm.

Banker’s Algorithm

Banker's Algorithm is a resource allocation and deadlock avoidance algorithm which tests every
request made by a process for resources. It checks for the safe state: if the system remains in a safe
state after granting the request, it allows the request; if there is no safe state, it does not allow the
request made by the process.
Inputs to Banker’s Algorithm:

1. Max need of resources by each process.


2. Currently allocated resources by each process.
3. Max free available resources in the system.

The request will only be granted under the below condition:

1. If the request made by the process is less than or equal to the maximum need of that process.
2. If the request made by the process is less than or equal to the freely available resources in the system.

Example:

Total resources in system:

A B C D
6 5 7 6

Available system resources are:

A B C D
3 1 1 2

Processes (currently allocated resources):

     A B C D
P1   1 2 2 1
P2   1 0 3 3
P3   1 2 1 0

Processes (maximum resources):

     A B C D
P1   3 3 2 2
P2   1 2 3 4
P3   1 3 5 0

Need = maximum resources - currently allocated resources.

Processes (need resources):

     A B C D
P1   2 1 0 1
P2   0 2 0 1
P3   0 1 4 0
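The Need matrix is just an element-wise subtraction, which can be checked directly; a minimal sketch using the numbers from the example above:

```python
allocation = {"P1": [1, 2, 2, 1], "P2": [1, 0, 3, 3], "P3": [1, 2, 1, 0]}
maximum    = {"P1": [3, 3, 2, 2], "P2": [1, 2, 3, 4], "P3": [1, 3, 5, 0]}

# Need = Maximum - Allocation, element-wise per process
need = {p: [m - a for m, a in zip(maximum[p], allocation[p])]
        for p in allocation}
print(need)
# {'P1': [2, 1, 0, 1], 'P2': [0, 2, 0, 1], 'P3': [0, 1, 4, 0]}
```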

Q.5 How to handle deadlock by operating system? Explain deadlock avoidance methods with example.

Deadlock

In a multiprogramming operating system, a number of processes compete for a finite number of
resources, and sometimes a waiting process never gets the chance to change its state because the
resources for which it is waiting are held by other waiting processes. A set of processes is called
deadlocked when each of them is waiting for an event that can only be caused by another process in
the same set.
Every process follows the system model, which means the process requests a resource; if the resource
is not allocated, the process waits; otherwise it uses the resource and releases it after use.

Methods for handling deadlock

There are mainly four methods for handling deadlock.

1. Deadlock ignorance

It is the most popular method: the system acts as if no deadlock can occur, and when one happens the
user simply restarts. Handling deadlock is expensive, because a lot of code needs to be altered, which
decreases performance; so for less critical jobs deadlocks are ignored. The Ostrich algorithm is used in
deadlock ignorance. It is used in Windows, Linux etc.

2. Deadlock prevention

It means that we design such a system where there is no chance of having a deadlock.
 Mutual exclusion:
It cannot be resolved, as it is a hardware property. For example, a printer cannot be simultaneously
shared by several processes. This is very difficult to eliminate because some resources are not sharable.
 Hold and wait:
Hold and wait can be resolved using the conservative approach, where a process can start only if it has
acquired all its resources in advance.
 Active approach:
Here the process acquires only the resources it currently requires, but whenever it requires a new
resource it must first release all the resources it holds.
 Wait time out:
Here there is a maximum time bound for which a process can wait for other resources, after which it
must release the resources it holds.
 Circular wait:
In order to remove circular wait, we assign a number to every resource, and a process can request
resources only in increasing order; otherwise the process must release all the higher-numbered
resources it has acquired and then make a fresh request.

 No pre-emption:
In no pre-emption, we allow forceful pre-emption, where a resource can be forcibly pre-empted. The
pre-empted resource is added to the list of resources for which the process is waiting. The process can
be restarted only when it regains its old resources. Priority must be given to a process which is in the
waiting state.

3. Deadlock avoidance

Here, whenever a process enters the system it must declare its maximum demand, so that the deadlock
problem can be addressed before a deadlock occurs. This approach employs an algorithm to assess the
possibility that a deadlock would occur and acts accordingly. Even if the necessary conditions for
deadlock are in place, it is still possible to avoid deadlock by allocating resources carefully.
A deadlock avoidance algorithm dynamically examines the resource allocation state to ensure that a
circular wait condition can never exist, where the resource allocation state is defined by the number of
available and allocated resources and the maximum demand of each process. There are 3 states of the system:

Safe state

When a system can allocate resources to the processes in such a way that it still avoids deadlock, the
state is called a safe state. When a safe sequence exists, we can say that the system is in the safe state.

The system is in a safe state only if there exists a safe sequence. A sequence of processes P1, P2, ..., Pn
is a safe sequence for the current allocation state if, for each Pi, the resource requests that Pi can still
make can be satisfied by the currently available resources plus the resources held by all Pj with j < i.

Methods for deadlock avoidance

1) Resource allocation graph

This graph is also a kind of graphical Banker's algorithm, where a process is denoted by a circle Pi and
a resource is denoted by a rectangle Rj; the dots inside the rectangle represent the instances of that
resource.

The presence of a cycle in the resource allocation graph is a necessary but not sufficient condition for
the detection of deadlock. If every resource type has exactly one instance, then the presence of a cycle
is a necessary as well as sufficient condition for the detection of deadlock.

The system is in an unsafe state (a cycle exists): if P1 requests R2 and P2 requests R1, then deadlock will occur.

2) Bankers’s algorithm

The resource allocation graph algorithm is not applicable to a system with multiple instances of each
resource type. So for such a system, Banker's algorithm is used.

Here, whenever a process enters the system it must declare the maximum demand possible.

At runtime, we maintain some data structures like current allocation, current need, current available,
etc. Whenever a process requests some resources, we first check whether the system will remain in a
safe state, meaning that even if every process requires its maximum resources, there is some sequence
in which the requests can be entertained; if yes, the request is granted, otherwise it is rejected.
Safety algorithm

This algorithm is used to find whether the system is in a safe state or not. We can find:

Remaining Need = Max Need – Current Allocation

Current Available = Total Available – Current Allocation

Let's understand it by an example:

Consider three processes with the same allocation and maximum matrices as in Q.4; the total resources
are A = 6, B = 5, C = 7, D = 6.

First we find the need matrix by Need = Maximum – Allocation.

Then find the available resources = Total – Allocated:

A B C D: (6 5 7 6) - (3 4 6 4) = (3 1 1 2)

Available resources: (3 1 1 2)
Then we check whether the system is in deadlock or not and find the safe sequence of process.

P1 can be satisfied

Available= P1 allocated + available

( 1, 2, 2, 1) +( 3, 1, 1,2) = (4, 3, 3, 3)

P2 can be satisfied

Available= P2 allocated + available

(1, 0, 3, 3) + (4, 3, 3, 3) = (5, 3, 6, 6)

P3 can be satisfied

Available= P3 allocated + available

(1, 2, 1, 0) + (5, 3, 6, 6) = (6, 5, 7, 6)

So the system is safe and the safe sequence is P1 → P2 → P3
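The safety check above can be sketched as a small function, using the matrices from this example (`is_safe` is my own name for it):

```python
def is_safe(available, allocation, need):
    work = list(available)
    finished = {p: False for p in allocation}
    sequence = []
    while len(sequence) < len(allocation):
        progressed = False
        for p in allocation:
            if not finished[p] and all(n <= w for n, w in zip(need[p], work)):
                # p can run to completion; it then releases its allocation
                work = [w + a for w, a in zip(work, allocation[p])]
                finished[p] = True
                sequence.append(p)
                progressed = True
        if not progressed:
            return None        # no process can proceed -> unsafe state
    return sequence

allocation = {"P1": [1, 2, 2, 1], "P2": [1, 0, 3, 3], "P3": [1, 2, 1, 0]}
need       = {"P1": [2, 1, 0, 1], "P2": [0, 2, 0, 1], "P3": [0, 1, 4, 0]}

print(is_safe([3, 1, 1, 2], allocation, need))  # ['P1', 'P2', 'P3']
```

Each satisfied process hands its allocation back to `work`, which is exactly how the running totals (4, 3, 3, 3), (5, 3, 6, 6) and (6, 5, 7, 6) were produced above.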

Q.6 Explain various disk scheduling algorithms in brief

Disk Scheduling Algorithms

Disk scheduling is done by operating systems to schedule I/O requests arriving for the disk. Disk scheduling
is also known as I/O scheduling.

Disk scheduling is important because:

 Multiple I/O requests may arrive by different processes and only one I/O request can be served at a
time by the disk controller. Thus other I/O requests need to wait in the waiting queue and need to be
scheduled.
 Two or more requests may be far from each other, which can result in greater disk arm movement.
 Hard drives are one of the slowest parts of the computer system and thus need to be accessed in an
efficient manner.

There are many Disk Scheduling Algorithms but before discussing them let’s have a quick look at some of
the important terms:

 Seek Time: Seek time is the time taken to move the disk arm to the specified track where the data is to be
read or written. The disk scheduling algorithm that gives the minimum average seek time is better.
 Rotational Latency: Rotational latency is the time taken by the desired sector of the disk to rotate into a
position where the read/write head can access it. The disk scheduling algorithm that gives the minimum
rotational latency is better.
 Transfer Time: Transfer time is the time to transfer the data. It depends on the rotating speed of the
disk and number of bytes to be transferred.
 Disk Access Time: Disk Access Time = Seek Time + Rotational Latency + Transfer Time
 Disk Response Time: Response time is the time a request spends waiting to perform its I/O operation.
Average response time is the average of the response times of all requests. Variance of response time is a
measure of how individual requests are serviced relative to the average response time. The disk
scheduling algorithm that gives the minimum variance of response time is better.

Disk Scheduling Algorithms

1. FCFS: FCFS is the simplest of all the Disk Scheduling Algorithms. In FCFS, the requests are
addressed in the order they arrive in the disk queue.

Advantages:

 Every request gets a fair chance


 No indefinite postponement

Disadvantages:

 Does not try to optimize seek time


 May not provide the best possible service

2. SSTF: In SSTF (Shortest Seek Time First), requests having shortest seek time are executed first. So,
the seek time of every request is calculated in advance in the queue and then they are scheduled
according to their calculated seek time. As a result, the request near the disk arm will get executed first.
SSTF is certainly an improvement over FCFS as it decreases the average response time and increases
the throughput of the system.

Advantages:

 Average Response Time decreases


 Throughput increases

Disadvantages:

 Overhead to calculate seek time in advance


 Can cause Starvation for a request if it has higher seek time as compared to incoming requests
 High variance of response time as SSTF favours only some requests

3. SCAN: In the SCAN algorithm the disk arm moves in a particular direction and services the requests
in its path; after reaching the end of the disk, it reverses direction and services the requests on its way
back. The algorithm works like an elevator and is hence also known as the elevator algorithm. As a
result, requests in the midrange are serviced more, while requests arriving just behind the disk arm
have to wait.

Advantages:

 High throughput
 Low variance of response time
 Low average response time

Disadvantages:

 Long waiting time for requests for locations just visited by disk arm

4. CSCAN: In SCAN algorithm, the disk arm again scans the path that has been scanned, after reversing
its direction. So, it may be possible that too many requests are waiting at the other end or there may be
zero or few requests pending at the scanned area.

These situations are avoided in the CSCAN algorithm, in which the disk arm, instead of reversing its
direction, goes to the other end of the disk and starts servicing requests from there. The disk arm thus
moves in a circular fashion; since the algorithm is otherwise similar to SCAN, it is known as C-SCAN
(Circular SCAN).

Advantages:

 Provides more uniform wait time compared to SCAN

5. LOOK: It is similar to the SCAN disk scheduling algorithm except that the disk arm, instead of going
to the end of the disk, goes only as far as the last request to be serviced in front of the head and then
reverses its direction from there. It thus prevents the extra delay caused by unnecessary traversal to the
end of the disk.

6. CLOOK: Just as LOOK is similar to SCAN, CLOOK is similar to the CSCAN disk scheduling
algorithm. In CLOOK, the disk arm, instead of going to the end of the disk, goes only as far as the last
request to be serviced in front of the head and then jumps to the last request at the other end. Thus it
also prevents the extra delay caused by unnecessary traversal to the end of the disk.

Each algorithm is unique in its own way. Overall Performance depends on the number and type of requests.
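As a rough illustration of how the algorithms differ, the sketch below computes the total head movement for FCFS and SSTF on a hypothetical request queue of cylinder numbers, with the head starting at cylinder 50:

```python
def fcfs(head, requests):
    """Total head movement when requests are served in arrival order."""
    total = 0
    for r in requests:
        total += abs(head - r)  # seek distance to the next request in order
        head = r
    return total

def sstf(head, requests):
    """Total head movement when the nearest pending request is served first."""
    pending, total = list(requests), 0
    while pending:
        # Always service the pending request closest to the current head.
        nearest = min(pending, key=lambda r: abs(head - r))
        total += abs(head - nearest)
        head = nearest
        pending.remove(nearest)
    return total

queue = [82, 170, 43, 140, 24, 16, 190]  # hypothetical request queue
print(fcfs(50, queue))  # -> 642
print(sstf(50, queue))  # -> 208
```

On this queue SSTF cuts the total seek distance by roughly two thirds, at the cost of possible starvation of far-away requests.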

Unit 4:

Short Answers: (2 Marks Each)


Q. 1 What is a file? Explain different types of files.

Files: Computers store users' data in files so that it can be kept for a long period of time. A file can
contain any type of information: text, images, or data in any other format. There must therefore be
mechanisms for storing the information, accessing it, and performing operations on it.

1) Ordinary Files or Simple Files: An ordinary file may belong to any application, for example
Notepad, Paint, a C program, songs etc. All files created by a user are ordinary files. They store the
information of user programs and can contain text, a database, an image, or any other type of data.
2) Directory Files: Files that are grouped in a particular directory or folder are directory files, because
they belong to and are stored in a directory. For example, a folder named Songs that contains many
songs forms a directory file.
3) Special Files: Special files are not created by the user; they are the files necessary to run the system
and are created by the system itself. All the files of an operating system such as Windows are special
files. There are many types of special files: system files, Windows files, and input/output files. System
files are stored with the .sys extension.
4) FIFO Files: First-In-First-Out files are used by the system to execute processes in order: the request
that arrives first is executed first. When users request services from the system, the requests are
arranged in files and served in the sequence in which they were received, i.e. in FIFO order.
Q. 2 What is file system?

A file system is the structure and logic an operating system uses to control how data on a storage disk,
typically a hard disk drive (HDD), is stored, accessed and managed. It is a logical disk component that
manages a disk's internal operations as they relate to the computer and is abstracted from the human user.

Q. 3 What are attributes of File? Explain.

Attributes of a File

Following are some of the attributes of a file:

 Name: The only information kept in human-readable form.
 Identifier: A unique tag (number) that identifies the file within the file system.
 Type: Needed for systems that support different types of files.
 Location: A pointer to the file's location on the device.
 Size: The current size of the file.
 Protection: Controls and assigns the powers of reading, writing and executing.
 Time, date, and user identification: Data for protection, security, and usage monitoring.

Q. 4 Define different types of file operations.

Types of File Operations

Files are not made just for reading their contents; we can also perform other operations on them, as
explained below:

1) Read: read the information stored in the file.
2) Write: insert new content into the file.
3) Rename: change the name of the file.
4) Copy: copy the file from one location to another.
5) Sort: arrange the contents of the file.
6) Move or Cut: move the file from one place to another.
7) Delete: remove the file.
8) Execute: run the file and display its output.

Q.5 Which types of device directory structure are used by Linux? Explain.

The Filesystem Hierarchy Standard (FHS) defines the structure of file systems on Linux and other
UNIX-like operating systems. However, Linux file systems also contain some directories that aren’t
yet defined by the standard.

1. Tree-structured directory –
Once we have seen a two-level directory as a tree of height 2, the natural generalization is to
extend the directory structure to a tree of arbitrary height.
This generalization allows users to create their own subdirectories and to organize their files
accordingly.
A tree structure is the most common directory structure. The tree has a root directory, and every
file in the system has a unique path.

Advantages:

 Very general, since a full path name can be given.
 Very scalable; the probability of name collisions is low.
 Searching becomes easy; we can use both absolute and relative paths.

Disadvantages:
 Not every file fits the hierarchical model; a file may need to appear in multiple
directories.
 Files cannot be shared between directories.
 It can be inefficient, because accessing a file may require traversing multiple directories.

Descriptive Answers: (5 to 20 Marks)

Q. 1 What are the various access methods for file system?

When a file is used, information is read and accessed into computer memory and there are several
ways to access this information of the file. Some systems provide only one access method for files.
Other systems, such as those of IBM, support many access methods, and choosing the right one for
a particular application is a major design problem.

There are three ways to access a file into a computer system: Sequential-Access, Direct Access,
Index sequential Method.

1. Sequential Access –
It is the simplest access method. Information in the file is processed in order, one record after
the other. This mode of access is by far the most common; for example, editors and compilers
usually access files in this fashion.

Read and write make up the bulk of the operations on a file. A read operation (read next) reads
the next portion of the file and automatically advances a file pointer, which keeps track of the
I/O location. Similarly, a write operation (write next) appends to the end of the file and advances
the pointer to the end of the newly written material.

Key points:

 Data is accessed one record right after another, in order.
 When we use the read command, the pointer moves ahead by one record.
 When we use the write command, memory is allocated and the pointer moves to the end of
the file.
 Such a method is reasonable for tape.
2. Direct Access –
Another method is the direct access method, also known as the relative access method. Fixed-length
logical records allow the program to read and write records rapidly in no particular order.
Direct access is based on the disk model of a file, since a disk allows random access to any
file block. For direct access, the file is viewed as a numbered sequence of blocks or records.
Thus, we may read block 14, then block 59, and then write block 17. There is no
restriction on the order of reading and writing for a direct access file.

A block number provided by the user to the operating system is normally a relative block
number, the first relative block of the file is 0 and then 1 and so on.

3. Index sequential method –

It is another method of accessing a file, built on top of the direct access method. This method
constructs an index for the file. The index, like an index at the back of a book, contains
pointers to the various blocks. To find a record in the file, we first search the index
and then, with the help of the pointer, access the file directly.

Key points:

 It is built on top of the sequential file organization.
 It controls the pointer by using an index.
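The direct access method described above can be sketched with ordinary file I/O: viewing the file as a numbered sequence of fixed-length records, seek() jumps straight to any block without reading the ones before it. The record size and file contents are made up for illustration.

```python
import os
import tempfile

RECORD = 16  # bytes per fixed-length record (hypothetical)

# Create a file of 60 numbered records, blocks 0..59.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    for i in range(60):
        f.write(f"record {i:02d}".ljust(RECORD).encode())

# Read blocks in no particular order, as the text describes.
with open(path, "rb") as f:
    f.seek(14 * RECORD)                     # jump straight to block 14 ...
    print(f.read(RECORD).decode().strip())  # -> record 14
    f.seek(59 * RECORD)                     # ... then block 59
    print(f.read(RECORD).decode().strip())  # -> record 59
```
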

Q. 2 Explain File structure.

File structure is the organization of data on a secondary storage device in such a way as to
minimize access time and storage space. A file structure is a combination of representations
for data in files and of operations for accessing the data. It allows applications to read, write
and modify data, and it may also support finding the data that matches some search criteria or
reading through the data in some particular order.

Q. 3 Write short notes on file protections.

File-based authentication allows you to store usernames, passwords or password hashes and
optional meta-data in a file that will be used to authenticate incoming connections.

File-based authentication is a good choice for scenarios with smaller amounts of connections that
need authenticating, e.g. publicly readable realtime dashboards with a small number of provider
processes delivering the data.

Linux File Permissions

The file permissions can be seen by using the ls command with the -l (long listing option)
as shown below

ls -l

total 0

-rwxr-xr-x 1 stewart stewart 0 2009-01-30 17:00 executable

-r--r--r-- 1 stewart stewart 0 2009-01-30 16:58 read-only-all.txt


-rw-rw-r-- 1 stewart engineers 0 2009-01-30 16:59 read-write-group.txt

From the ls output the file permissions can be seen at the left.
These are the first 10 characters of the file entry. The first character relates to the file type
then the remaining are in 3 groups of 3 characters relating to the different access types.

These permissions are applied to (left to right)


user - the owner of the file
group - a group of people, e.g. a project team or department
others - anyone else that has a login to the computer

These are then split into 3 different permissions, that of being able to:

read - Look at the contents of a file / find out what files are in a directory

write - Change or delete the contents of a file / create or remove files in a directory

execute - Can execute (run as a program) a file / can change to the directory or copy from
the directory.

These are laid out as follows (note these are the first 10 characters of the ls -l display):

[Figure: access permission layout]

If the entry is filled in then it is in effect. If it is dashed out '-' then it does not apply.

There are also further permissions that can be set, however these are more advanced and
are explained later. Also note that root can override most of the permissions.

Changing File Permissions (chmod)

Assuming that you are either the owner of the file or root it is possible for you to change
the permissions of a file to either add or remove permissions. This is done using the
chmod (change mode) command.

The chmod command can be used in one of two ways. The Symbolic Format or the octal
format. Symbolic is useful for new users as it is easier to use, however if effort is made to
understand the octal format then it can be a powerful and quick way of changing file
permissions.

The basic format of the command is:

chmod mode filename

It is only the format of the mode parameter that is different when using the different
permission formats.

In symbolic format permissions are added or deleted using the following symbols

u = owner of the file (user)

g = group owner of the file (group)

o = anyone else on the system (other)


+ = add permission

- = remove permission

r = read permission

w = write permission

x = execute permission

For example to add write access to the group the following command is used:

chmod g+w file1

In Octal format the mode is based upon a octal number representing the different mode
permissions, where each of the permission groups (user, group, others) has an octal value
representing the read, write and execute bits. This requires a little bit of knowledge on
binary or octal number bases. The format is actually octal (but this can be likened to 3
separate binary to decimal conversions for each of the user/group/all permissions). The
main benefits of using octal format is that all the permissions are set at the same time and
the command is much shorter than if all the permissions were set using the symbolic
format.

          User    Group   Others
Symbolic  rwx     rw-     r--
Binary    111     110     100
          4+2+1   4+2+0   4+0+0
Octal     7       6       4

The above file would have the octal number 764 and would therefore be changed using
the command

chmod 764 file1

An alternative way of working out the octal values is to add the following numbers
depending upon the permission required.
Read = 4
Write = 2
Execute = 1

Therefore if you wanted to set read to yes, write to no and execute to yes, this would be
4+1=5
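The octal arithmetic above (read = 4, write = 2, execute = 1, one digit per permission group) can be sketched in Python, which accepts the same octal modes as chmod via os.chmod. The file used here is a temporary file created just for illustration.

```python
import os
import stat
import tempfile

def octal_digit(r, w, x):
    """One permission triple (read, write, execute as 0/1) -> one octal digit."""
    return 4 * r + 2 * w + 1 * x

# rwx rw- r--  ->  7 6 4, i.e. the mode set by `chmod 764 file1`
mode = octal_digit(1, 1, 1) * 64 + octal_digit(1, 1, 0) * 8 + octal_digit(1, 0, 0)
print(oct(mode))  # -> 0o764

fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, mode)  # equivalent of: chmod 764 <file>
print(oct(stat.S_IMODE(os.stat(path).st_mode)))  # -> 0o764
```

The worked example at the end of the answer (read yes, write no, execute yes) is octal_digit(1, 0, 1) = 4 + 1 = 5.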

Q. 4 Differentiate between ordinary files and device files.

Ordinary files, or simply files, are files that can hold documents, pictures, programs, and other
kinds of data. Directory files, also referred to as directories or folders, can hold ordinary files and
other directory files.

Text Files

Text files are regular files that contain information readable by the user. This information is stored
in ASCII. You can display and print these files. The lines of a text file must not
contain NUL characters, and none can exceed {LINE_MAX} bytes in length, including the new-
line character.

The term text file does not prevent the inclusion of control or other nonprintable characters (other
than NUL). Therefore, standard utilities that list text files as inputs or outputs are either able to
process the special characters gracefully or they explicitly describe their limitations within their
individual sections.

Binary Files

Binary files are regular files that contain information readable by the computer. Binary files may be
executable files that instruct the system to accomplish a job. Commands and programs are stored in
executable, binary files. Special compiling programs translate ASCII text into binary code.

The only difference between text and binary files is that text files have lines of less
than {LINE_MAX} bytes, with no NUL characters, each terminated by a new-line character.

Directory Files

Directory files contain information the system needs to access all types of files, but they do not
contain the actual file data. As a result, directories occupy less space than a regular file and give the
file system structure flexibility and depth. Each directory entry represents either a file or a
subdirectory. Each entry contains the name of the file and the file's index node reference number
(i-node). The i-node points to the unique index node assigned to the file. The i-node describes the
location of the data associated with the file. Directories are created and controlled by a separate set
of commands.

Special Files

Special files define devices for the system or temporary files created by processes. There are three
basic types of special files: FIFO (first-in, first-out), block, and character. FIFO files are also called
pipes. Pipes are created by one process to temporarily allow communication with another process.
These files cease to exist when the first process finishes. Block and character files define devices.

Every file has a set of permissions (called access modes) that determine who can read, modify, or
execute the file.

Q.5 Describe the file authentication process in the Linux operating system.

Linux File Permissions

The file permissions can be seen by using the ls command with the -l (long listing option)
as shown below

ls -l

total 0
-rwxr-xr-x 1 stewart stewart 0 2009-01-30 17:00 executable

-r--r--r-- 1 stewart stewart 0 2009-01-30 16:58 read-only-all.txt

-rw-rw-r-- 1 stewart engineers 0 2009-01-30 16:59 read-write-group.txt

From the ls output the file permissions can be seen at the left.
These are the first 10 characters of the file entry. The first character relates to the file type
then the remaining are in 3 groups of 3 characters relating to the different access types.

These permissions are applied to (left to right)


user - the owner of the file
group - a group of people, e.g. a project team or department
others - anyone else that has a login to the computer

These are then split into 3 different permissions, that of being able to:

read - Look at the contents of a file / find out what files are in a directory

write - Change or delete the contents of a file / create or remove files in a directory

execute - Can execute (run as a program) a file / can change to the directory or copy from
the directory.

These are laid out as follows (note these are the first 10 characters of the ls -l display):

[Figure: access permission layout]

If the entry is filled in then it is in effect. If it is dashed out '-' then it does not apply.

There are also further permissions that can be set, however these are more advanced and
are explained later. Also note that root can override most of the permissions.

Changing File Permissions (chmod)

Assuming that you are either the owner of the file or root it is possible for you to change
the permissions of a file to either add or remove permissions. This is done using the
chmod (change mode) command.

The chmod command can be used in one of two ways. The Symbolic Format or the octal
format. Symbolic is useful for new users as it is easier to use, however if effort is made to
understand the octal format then it can be a powerful and quick way of changing file
permissions.

The basic format of the command is:

chmod mode filename

It is only the format of the mode parameter that is different when using the different
permission formats.

In symbolic format permissions are added or deleted using the following symbols
u = owner of the file (user)

g = group owner of the file (group)

o = anyone else on the system (other)

+ = add permission

- = remove permission

r = read permission

w = write permission
x = execute permission

For example to add write access to the group the following command is used:

chmod g+w file1

In Octal format the mode is based upon a octal number representing the different mode
permissions, where each of the permission groups (user, group, others) has an octal value
representing the read, write and execute bits. This requires a little bit of knowledge on binary or
octal number bases. The format is actually octal (but this can be likened to 3 separate binary to
decimal conversions for each of the user/group/all permissions). The main benefits of using octal
format is that all the permissions are set at the same time and the command is much shorter than if
all the permissions were set using the symbolic format.

          User    Group   Others
Symbolic  rwx     rw-     r--
Binary    111     110     100
          4+2+1   4+2+0   4+0+0
Octal     7       6       4

The above file would have the octal number 764 and would therefore be changed using
the command

chmod 764 file1

An alternative way of working out the octal values is to add the following numbers
depending upon the permission required.
Read = 4
Write = 2
Execute = 1
Therefore if you wanted to set read to yes, write to no and execute to yes, this would be
4+1=5

Unit 5:

Short Answers: (2 Marks Each)

Q. 1 Describe the components of the Linux operating system.

Components of Linux

The Linux architecture primarily has these components: hardware, kernel, shell and utilities:

 Hardware: Peripheral devices such as RAM, HDD, CPU together constitute Hardware layer
for the LINUX operating system.
 Kernel: The Core part of the Linux OS is called Kernel, it is responsible for many activities of
the LINUX operating system. It interacts directly with hardware, which provides low-level
services like providing hardware details to the system. We have two types of kernels –
Monolithic Kernel and MicroKernel
 Shell: The shell is an interface between the user and the kernel, it hides the complexity of
functions of the kernel from the user. It accepts commands from the user and performs the
action.
 Utilities: Operating system functions are made available to the user through utilities. Individual
and specialized functions can be accessed through the system utilities.
Q. 2 What do you mean by real time? Discuss hard real time and soft real time.

Real time system means that the system is subjected to real time, i.e., response should be guaranteed
within a specified timing constraint or system should meet the specified deadline. For example: flight
control system, real time monitors etc.

Types of real time systems based on timing constraints:

1. Hard real time system –

This type of system can never miss its deadline; missing the deadline may have disastrous
consequences. The usefulness of a result produced by a hard real time system decreases abruptly
and may become negative as tardiness increases. Tardiness means how late a real time system
completes its task with respect to its deadline. Example: flight controller system.

2. Soft real time system –

This type of system can miss its deadline occasionally with some acceptably low probability.
Missing the deadline has no disastrous consequences. The usefulness of a result produced by a
soft real time system decreases gradually with increase in tardiness. Example: telephone
switches.

Q. 3 What is RTOS?

A real-time operating system (RTOS) is an operating system (OS) intended to serve real-
time applications that process data as it comes in, typically without buffer delays. Processing time
requirements (including any OS delay) are measured in tenths of seconds or shorter increments of
time. A real-time system is a time bound system which has well defined fixed time constraints.
Processing must be done within the defined constraints or the system will fail. They either are event
driven or time sharing. Event driven systems switch between tasks based on their priorities while
time sharing systems switch the task based on clock interrupts. Most RTOSs use a pre-
emptive scheduling algorithm.

Q. 4 Discuss different types of mobile OS.

Popular Mobile Operating Systems

 Android OS (Google Inc.)
 Bada (Samsung Electronics)
 BlackBerry OS (Research In Motion)
 iPhone OS / iOS (Apple)
 MeeGo OS (Nokia and Intel)
 Palm OS (Garnet OS)
 Symbian OS (Nokia)
 webOS (Palm/HP)

Q.5 Discuss similarities and differences Between UNIX And Linux.

Cost:
Linux: Can be freely downloaded and distributed, through magazines, books etc. There are priced
versions of Linux also, but they are normally cheaper than Windows.
Unix: Different flavors of Unix have different cost structures, according to the vendor.

Development and Distribution:
Linux: Developed through open-source development, i.e. sharing and collaboration of code and
features through forums etc., and distributed by various vendors.
Unix: Unix systems are divided into various flavors, mostly developed by AT&T as well as various
commercial vendors and non-profit organizations.

Manufacturer:
Linux: The Linux kernel is developed by the community; Linus Torvalds oversees things.
Unix: The three biggest distributions are Solaris (Oracle), AIX (IBM) and HP-UX (Hewlett-Packard).
Apple makes OS X, a Unix-based OS.

User:
Linux: Everyone, from home users to developers and computer enthusiasts alike.
Unix: Unix operating systems were developed mainly for mainframes, servers and workstations, except
OS X, which is designed for everyone. The Unix environment and the client-server program model were
essential elements in the development of the Internet.

Usage:
Linux: Can be installed on a wide variety of computer hardware, ranging from mobile phones, tablet
computers and video game consoles to mainframes and supercomputers.
Unix: Used in internet servers, workstations and PCs; the backbone of the majority of finance
infrastructure and of many 24x365 high-availability solutions.

File system support:
Linux: Ext2, Ext3, Ext4, JFS, ReiserFS, XFS, Btrfs, FAT, FAT32, NTFS.
Unix: jfs, gpfs, hfs, hfs+, ufs, xfs, zfs.

Text mode interface:
Linux: BASH (Bourne Again SHell) is the Linux default shell; it can support multiple command
interpreters.
Unix: Originally the Bourne Shell; now compatible with many others including the BASH, Korn and C
shells.

What is it?:
Linux: An example of open-source software development and a free operating system (OS).
Unix: An operating system that is very popular in universities, companies, big enterprises etc.

GUI:
Linux: Typically provides two GUIs, KDE and GNOME, but there are many alternatives such as LXDE,
Xfce, Unity, MATE, twm, etc.
Unix: Initially a command-based OS, but later a GUI called the Common Desktop Environment was
created. Most distributions now ship with GNOME.

Price:
Linux: Free, but support is available for a price.
Unix: Some versions are free for development use (Solaris), but support is available for a price.

Security:
Linux: About 60-100 viruses listed to date, none of them actively spreading nowadays.
Unix: A rough estimate of 85-120 Unix viruses reported to date.

Threat detection and solution:
Linux: Threat detection and solution is very fast, as Linux is mainly community driven; whenever any
Linux user posts any kind of threat, several developers start working on it from different parts of the
world.
Unix: Because of the proprietary nature of the original Unix, users have to wait a while to get the proper
bug-fixing patch, but these threats are not as common.

Processors:
Linux: Dozens of different kinds.
Unix: x86/x64, SPARC, Power, Itanium, PA-RISC, PowerPC and many others.

Examples:
Linux: Ubuntu, Fedora, Red Hat, Debian, Arch Linux, Android etc.
Unix: OS X, Solaris, all Linux.

Architectures:
Linux: Originally developed for Intel's x86 hardware; ports are available for over two dozen CPU types
including ARM.
Unix: Available on PA-RISC and Itanium machines; Solaris is also available for x86/x64-based systems;
OS X runs on PowerPC (10.0-10.5), x86 (10.4) and x64 (10.5-10.8).

Inception:
Linux: Inspired by MINIX (a Unix-like system); after adding many features such as a GUI and drivers,
Linus Torvalds developed the framework of the OS that became Linux in 1992. The Linux kernel was
released on 17 September 1991.
Unix: Developed in 1969 by a group of AT&T employees at Bell Labs, including Dennis Ritchie. It was
written in the C language and was designed to be a portable, multi-tasking and multi-user system in a
time-sharing configuration.

Descriptive Answers: (5 to 20 Marks)

Q. 1 How are processes managed in the Linux system? Explain in detail.

Process Management

A process is the basic context within which all user-requested activity is serviced within the
operating system. To be compatible with other UNIX systems, Linux must necessarily use a process
model similar to those of other UNIXes.

There are a few key places where Linux does things a little differently, however. In this section, we
review the traditional UNIX process model from Section 21.3.2, and introduce Linux's own
threading model.

The Fork/Exec Process Model

The basic principle of UNIX process management is to separate two distinct operations: the creation of a process and the running of a new program. A new process is created by the fork
system call, and a new program is run after a call to execve. These are two distinctly separate
functions. A new process may be created with fork without a new program being run; the new
subprocess simply continues to execute exactly the same program that the first, parent,
process was running. Equally, running a new program does not require that a new process be created
first: any process may call execve at any time. The currently running program is immediately
terminated, and the new program starts executing in the context of the existing process.
This model has the advantage of great simplicity. Rather than having to specify every detail of the environment of a new program in the system call that runs that program, new programs simply run in their existing environment. If a parent process wishes to modify the environment in which a new program is to be run, it can fork and then, still running the original program in a child process, make any system calls it requires to modify that child process before finally executing the new program.

Under UNIX, then, a process encompasses all the information that the operating system must maintain to track the context of a single execution of a single program. Under Linux, we can break down this context into a number of specific sections. Broadly, process properties fall into three groups: the process's identity, environment, and context.

Process Identity

A process's identity consists mainly of the following items:

Process ID (PID). Each process has a unique identifier. PIDs are used to specify processes to the operating system when an application makes a system call to signal, modify, or wait for another process. Additional identifiers associate the process with a process group (typically, a tree of processes forked by a single user command) and login session.

Credentials. Each process must have an associated user ID and one or more group IDs (user groups are discussed in Section 10.4.2; process groups are not) that determine the process's rights to access system resources and files.

Personality. Process personalities are not traditionally found on UNIX systems, but under Linux each process has an associated personality identifier that can slightly modify the semantics of certain system calls. Personalities are primarily used by emulation libraries to request that system calls be compatible with certain specific flavors of UNIX.

Q. 2 Write short notes on memory management in the Linux system.

There are two components to memory management under Linux. First, the physical memory-management system deals with allocating and freeing pages, groups of pages, and small blocks of memory. The second component handles virtual memory, which is memory mapped into the address space of running processes.

Management of Physical Memory

The primary physical memory manager in the Linux kernel is the page allocator. This allocator is responsible for allocating and freeing all physical pages, and it is capable of allocating ranges of physically contiguous pages on request. The allocator uses a buddy-heap algorithm to keep track of available physical pages.

A buddy-heap allocator pairs adjacent units of allocatable memory together; hence its name. Each allocatable memory region has an adjacent partner, or buddy, and whenever two allocated partner regions are both freed, they are combined to form a larger region. That larger region also has a partner, with which it can combine to form a still larger free region. Alternatively, if a small memory request cannot be satisfied by allocating an existing small free region, then a larger free region will be subdivided into two partners to satisfy the request. Separate linked lists are used to record the free memory regions of each allowable size; under Linux, the smallest size allocatable under this mechanism is a single physical page. The figure shows an example of buddy-heap allocation: a 4-kilobyte region is requested, but the smallest available region is 16 kilobytes, so the region is broken up recursively until a piece of the desired size is available.
Q. 3 What are the advantages and disadvantages of writing an operating system in a high-level
language, such as C?

There are many advantages to writing an operating system in a high-level language such as C. First, by programming at a higher level of abstraction, the number of programming errors is reduced as the code becomes more compact. Second, many high-level languages provide advanced features such as bounds checking that further minimize programming errors and security loopholes. Also, high-level programming languages have powerful programming environments that include tools such as debuggers and performance profilers that can be handy when developing code.

The disadvantage of using a high-level language is that the programmer is distanced from the underlying machine, which can cause a few problems. First, there may be a performance overhead introduced by the compiler and run-time system used for the high-level language. Second, certain operations and instructions that are available at the machine level might not be accessible from the language level, thereby limiting some of the functionality available to the programmer.

Q. 4 Would you classify Linux threads as user-level threads or as kernel-level threads? Support your
answer with appropriate arguments.

Linux threads are kernel-level threads. The threads are visible to the kernel and are
independently schedulable. User-level threads, on the other hand, are not visible to the kernel
and are instead manipulated by user-level schedulers. In addition, the threads used in the Linux
kernel support both the thread abstraction and the process abstraction. A new process is
created by simply associating a newly created kernel thread with a distinct address space, whereas
a new thread is created by simply creating a new kernel thread with the same address space. This
further indicates that the thread abstraction is intimately tied into the kernel.

Q.5 Explain the architecture and applications of a Mobile Operating System.

Mobile phones are the most popular communication devices today. Every mobile phone requires some type of mobile operating system as a platform to run its other services and to make it easy for users to access services such as voice calling, messaging, camera functionality, Internet facilities, and so on.

Early mobile operating systems were simple and unable to provide an effective interface, so the capabilities of the phones they supported were limited. Modern smartphones, however, are equipped with most of the advanced features of a full-fledged computer, including a high-speed central processing unit (CPU) and graphics processing unit (GPU), large storage space, multitasking, high-resolution screens, high-clarity cameras, multipurpose communication hardware, and so on.
