2) What is a system call? Explain the steps of system call execution. [07]
Ans: A system call is the programmatic way in which a user program requests a service from the kernel of the operating system.
Subject name: Operating System and Virtualization Subject code: 3141601
▪ The interface between the operating system and the user programs is defined by the set of
system calls that the operating system provides.
▪ The system calls available in the interface vary from operating system to operating system.
▪ Any single-CPU computer can execute only one instruction at a time.
▪ If a process is running a user program in user mode and needs a system service, such as reading
data from a file, it has to execute a trap or system call instruction to transfer control to the
operating system.
▪ The operating system then figures out what the calling process wants by inspecting the
parameters.
▪ Then it carries out the system call and returns control to the instruction following the system
call.
▪ Following steps describe how a system call is handled by an operating system.
▪ To understand how OS handles system calls, let us take an example of read system call.
▪ Read system call has three parameters: the first one specifying the file, the second one pointing
to the buffer, and the third one giving the number of bytes to read.
▪ Like nearly all system calls, it is invoked from C programs by calling a library procedure with
the same name as the system call: read.
▪ A call from a C program might look like this:
▪ count = read(fd, buffer, nbytes);
▪ The system call returns the number of bytes actually read in count.
▪ This value is normally the same as nbytes, but it may be smaller if, for example, end-of-file is encountered while reading.
▪ If the system call cannot be carried out, either due to an invalid parameter or a disk error, count
is set to -1, and the error number is put in a global variable, errno.
▪ Programs should always check the results of a system call to see if an error occurred.
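For illustration, the same call and its error handling can be sketched in Python, whose os module wraps the underlying read system call (the file name example.txt is made up for this demo):

```python
import os

# Create a hypothetical demo file so the read has something to return.
with open("example.txt", "w") as f:
    f.write("hello world")

fd = os.open("example.txt", os.O_RDONLY)   # returns a file descriptor
try:
    buffer = os.read(fd, 1024)             # ask for up to nbytes=1024 bytes
    count = len(buffer)                    # bytes actually read
    print(count)                           # 11: smaller than nbytes, EOF reached
except OSError as e:
    # The C call returns -1 and sets errno on failure; Python surfaces
    # the same errno through the OSError exception.
    print("read failed with errno", e.errno)
finally:
    os.close(fd)
```

Just as the text says, the caller must check the result: here count came back smaller than nbytes because end-of-file was reached.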
▪ System calls are performed in a series of steps.
▪ To make this concept clearer, let us examine the read call discussed above.
▪ In preparation for calling the read library procedure, which actually makes the read system call, the calling program first pushes the parameters onto the stack, as shown in steps 1-3.
▪ The first and third parameters are called by value, but the second parameter is passed by
reference, meaning that the address of the buffer (indicated by &) is passed, not the contents
of the buffer.
▪ Then comes the actual call to the library procedure (step 4). This instruction is the normal
procedure call instruction used to call all procedures.
▪ The library procedure, possibly written in assembly language, typically puts the system call
number in a place where the operating system expects it, such as a register (step 5).
▪ Then it executes a TRAP instruction to switch from user mode to kernel mode and start
execution at a fixed address within the kernel (step 6).
▪ The kernel code that starts examines the system call number and then dispatches to the correct
system call handler, usually via a table of pointers to system call handlers indexed on system
call number (step 7).
▪ At that point the system call handler runs (step 8).
▪ Once the system call handler has completed its work, control may be returned to the user-space
library procedure at the instruction following the TRAP instruction (step 9).
▪ This procedure then returns to the user program in the usual way procedure calls return (step
10).
▪ To finish the job, the user program has to clean up the stack, as it does after any procedure call
(step 11).
▪ A process also includes the process stack, which contains temporary data (such as local variables, function parameters, and return addresses); a data section, which contains global variables; a heap, which is memory dynamically allocated to the process at run time; and a process state that defines its current activity.
▪ A process changes its state during its execution. Each process may be in one of the following states: new, ready, running, waiting, or terminated.
Only one process can be in the running state on any processor at a time, while multiple processes may be in the ready and waiting states. The process state diagram shown below describes the different process states during its lifetime.
▪ Operating system maintains one special data structure called Process Control Block (PCB).
▪ All the information about each process is stored in the process control block (PCB), which is maintained by the operating system. It contains the following information associated with a specific process.
▪ Process state: It represents current status of the process. It may be new, ready, running or
waiting.
▪ Program counter: It indicates the address of the next instruction to be executed for this
process.
▪ CPU Registers: They include index registers, stack pointer and general purpose registers. It is
used to save process state when an interrupt occurs, so that it can resume from that state.
▪ CPU-scheduling information: It includes the process priority and pointers to scheduling queues.
▪ Memory management information: It includes the values of the base and limit registers and the page tables, depending on the memory system.
▪ Accounting information: It contains the amount of CPU and real time used, time limits, process numbers, and so on.
▪ I/O status information: It includes a list of I/O devices allocated to the process, a list of open
files and so on.
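As a sketch only, the PCB fields listed above can be modeled as a simple Python data structure; the field names are illustrative, not an actual kernel layout:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative Process Control Block; real kernels use C structs."""
    pid: int
    state: str = "new"            # new, ready, running, or waiting
    program_counter: int = 0      # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0             # CPU-scheduling information
    base: int = 0                 # memory-management information
    limit: int = 0
    cpu_time_used: float = 0.0    # accounting information
    open_files: list = field(default_factory=list)  # I/O status information

pcb = PCB(pid=42)
pcb.state = "ready"   # the OS updates the state field on each transition
```

The operating system keeps one such record per process and updates it on every state transition, interrupt, and context switch.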
4) Difference between process and thread. Explain the features of a Time-sharing system. [07]
Ans:
COMPARISON: PROCESS vs THREAD
▪ Memory sharing: Processes are completely isolated and do not share memory; threads share memory with each other.
▪ Resource consumption: Processes consume more resources; threads consume fewer.
▪ Communication: Inter-process communication is slower; communication between threads is faster.
▪ Creation time: Process creation takes more time; thread creation takes less time.
▪ Termination time: Process termination takes more time; thread termination takes less time.
1. Throughput is the amount of work completed in a unit of time; in other words, it is the number of jobs completed per unit of time. The scheduling algorithm must try to maximize the number of jobs processed per time unit.
2. Waiting time is how much time processes spend in the ready queue waiting their turn to get on the CPU. (Load average - The average number of processes sitting in the ready queue waiting their turn to get into the CPU.)
3. Turnaround time (TAT) is the time interval from the submission of a process to its completion. It can also be considered as the sum of the time periods spent waiting to get into memory or the ready queue, executing on the CPU, and executing input/output operations.
4. Response time is the difference between the first execution time and the arrival time; that is, the time taken by the system to respond to an input and display the required updated information.
5. In parallel computing, granularity (or grain size) of a task is a measure of the amount of work
(or computation) which is performed by that task. Another definition of granularity takes into
account the communication overhead between multiple processors or processing elements
6. The short-term scheduler (also known as the CPU scheduler) decides which of the ready, in-
memory processes is to be executed (allocated a CPU) after a clock interrupt, an I/O interrupt,
an operating system call or another form of signal
7. CPU utilization refers to a computer's usage of processing resources, or the amount of work
handled by a CPU. Actual CPU utilization varies depending on the amount and type of managed
computing tasks. Certain tasks require heavy CPU time, while others require less because of non-
CPU resource requirements.
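These metrics can be computed for a small hypothetical FCFS workload (the arrival and burst times below are made up for illustration):

```python
# Hypothetical FCFS schedule: (arrival_time, burst_time) per process.
jobs = [(0, 5), (1, 3), (2, 2)]

clock = 0
turnarounds, waits = [], []
for arrival, burst in jobs:
    start = max(clock, arrival)            # CPU may idle until the job arrives
    finish = start + burst
    turnarounds.append(finish - arrival)   # turnaround = completion - submission
    waits.append(start - arrival)          # waiting = time spent in ready queue
    clock = finish

throughput = len(jobs) / clock             # jobs completed per unit time
print(turnarounds, waits, throughput)      # [5, 7, 8] [0, 4, 6] 0.3
```

A scheduler aims to maximize throughput and CPU utilization while minimizing waiting, turnaround, and response times.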
Ans:
Swapping
▪ In practice the total amount of memory needed by all the processes is often much more than the available memory.
▪ Swapping is used to deal with this memory overload.
▪ Swapping consists of bringing in each process in its entirety, running it for a while, then putting it back on the disk.
▪ The event of copying a process from the hard disk to main memory is called swap-in.
▪ The event of copying a process from main memory to the hard disk is called swap-out.
▪ When swapping creates multiple holes in memory, it is possible to combine them all into one
big one by moving all the processes downward as far as possible. This technique is called as
memory compaction.
▪ Two ways to implement Swapping System
▪ Multiprogramming with Fixed partitions.
▪ Multiprogramming with dynamic partitions.
▪ Fragmentation occurs in a dynamic memory allocation system when many of the free blocks
are too small to satisfy any request.
▪ RAM Fragmentation
▪ Fragmentation can also refer to RAM that has small, unused holes scattered throughout it. This is called external fragmentation. With modern operating systems that use a paging scheme, a more common type of RAM fragmentation is internal fragmentation. This occurs when memory is allocated in frames and the frame size is larger than the amount of memory requested.
▪ External Fragmentation: External Fragmentation happens when a dynamic memory
allocation algorithm allocates some memory and a small piece is left over that cannot be
effectively used. If too much external fragmentation occurs, the amount of usable memory is
drastically reduced. Total memory space exists to satisfy a request, but it is not contiguous.
▪ Internal Fragmentation: Internal fragmentation is the space wasted inside allocated memory blocks because of restrictions on the allowed sizes of allocated blocks. Allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition, but it is not being used.
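Internal fragmentation can be quantified with simple arithmetic; the 4 KB frame size and the request size below are assumed values:

```python
# Paging with a fixed frame size: allocation is rounded up to whole frames.
frame_size = 4096                  # bytes per frame (assumed)
request = 10000                    # bytes the process actually asked for

frames_needed = -(-request // frame_size)      # ceiling division -> 3 frames
allocated = frames_needed * frame_size         # 12288 bytes handed out
internal_fragmentation = allocated - request   # 2288 bytes wasted inside frames
print(frames_needed, allocated, internal_fragmentation)
```

The wasted 2288 bytes sit inside the last allocated frame and cannot be used by any other process, which is exactly the internal fragmentation described above.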
7) Explain Thread Life Cycle with diagram. Explain Distributed OS with neat sketch
and give its pros and cons.[07]
Ans:
1. New state - After the creation of a Thread instance the thread is in this state, but before the start() method is invoked. At this point, the thread is considered not alive.
2. Runnable (Ready-to-run) state - A thread starts its life from the Runnable state. A thread first enters the Runnable state after the start() method is invoked, but a thread can also return to this state after running, waiting, sleeping, or coming back from the Blocked state. In this state a thread is waiting for a turn on the processor.
3. Running state - A thread in the Running state is currently executing. There are several ways to enter the Runnable state but there is only one way to enter the Running state: the scheduler selects a thread from the runnable pool.
4. Dead state - A thread can be considered dead when its run() method completes. If a thread reaches this state it cannot ever run again.
5. Blocked state - A thread can enter this state while waiting for resources that are held by another thread.
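The same life cycle can be observed with Python's threading module, which follows the start()/run() model described above:

```python
import threading, time

def worker():
    time.sleep(0.1)    # the running thread may also sleep or block

t = threading.Thread(target=worker)
print(t.is_alive())    # False: New state, start() not yet invoked
t.start()              # thread enters the Runnable state
print(t.is_alive())    # True: runnable/running
t.join()               # wait until run() completes
print(t.is_alive())    # False: Dead state, it cannot run again
```

Note that the program never puts the thread into the Running state directly; the scheduler alone decides when the runnable thread actually gets the processor.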
Distributed OS:
Operating systems are developed to ease people's daily lives. Depending on user benefits and needs, an operating system may be single-user or distributed. In a distributed system, many computers are connected to each other and share their resources with each other.
• Bandwidth is another problem if there is large data then all network wires to be replaced
which tends to become expensive
• Overloading is another problem in distributed operating systems
• If there is a database connected on a local system and many users access that database through remote or distributed means, then performance becomes slow
When a file is used, information is read and accessed into computer memory and there are
several ways to access this information of the file. Some systems provide only one access
method for files. Other systems, such as those of IBM, support many access methods, and
choosing the right one for a particular application is a major design problem.
There are three ways to access a file in a computer system: Sequential Access, Direct Access, and the Index Sequential Method.
1. Sequential Access –
It is the simplest access method. Information in the file is processed in order, one record after the other. This mode of access is by far the most common; for example, editors and compilers usually access files in this fashion.
Reads and writes make up the bulk of the operations on a file. A read operation (read next) reads the next portion of the file and automatically advances the file pointer, which keeps track of the I/O location. Similarly, a write operation (write next) appends to the end of the file and advances the pointer to the newly written material.
Key points:
• Data is accessed one record right after another, in order.
• When we use the read command, it moves the pointer ahead by one record.
• When we use the write command, it allocates memory and moves the pointer to the end of the file.
• Such a method is reasonable for tape.
2. Direct Access –
Another method is the direct access method, also known as the relative access method. A fixed-length logical record allows the program to read and write records rapidly in no particular order. Direct access is based on the disk model of a file, since a disk allows random access to any file block. For direct access, the file is viewed as a numbered sequence of blocks or records. Thus, we may read block 14, then block 59, and then write block 17. There is no restriction on the order of reading and writing for a direct access file.
A block number provided by the user to the operating system is normally a relative
block number, the first relative block of the file is 0 and then 1 and so on.
3. Index Sequential Method -
An index, like the index at the back of a book, contains pointers to the various blocks. To find a record in the file, we first search the index and then use the pointer to access the file directly.
Key points:
• It is built on top of sequential access.
• It controls the pointer by using the index.
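Both sequential and direct access can be demonstrated with ordinary file operations; the file records.bin and the 4-byte record size are hypothetical:

```python
RECORD = 4                                  # fixed-length 4-byte records (assumed)

# Build a demo file of ten numbered records.
with open("records.bin", "wb") as f:
    for i in range(10):
        f.write(i.to_bytes(RECORD, "little"))

with open("records.bin", "rb") as f:
    # Sequential access: each read advances the file pointer automatically.
    first = f.read(RECORD)
    second = f.read(RECORD)
    # Direct (relative) access: seek straight to record n, in any order.
    f.seek(7 * RECORD)
    seventh = int.from_bytes(f.read(RECORD), "little")
print(seventh)   # 7
```

With fixed-length records, record n always lives at byte offset n * RECORD, which is what makes direct access a constant-time seek.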
Device Independence:
• It should be possible to write programs that can access any I/O device without having to specify the device in advance.
• For example, a program that reads a file as input should be able to read a file on a floppy disk, on a hard disk, or on a CD-ROM, without having to modify the program for each different device.
Uniform naming:
• The name of a file or device should be some specific string or number. It must not depend on the device in any way.
• In UNIX, all disks can be integrated into the file system hierarchy in arbitrary ways, so the user need not be aware of which name corresponds to which device.
• All files and devices are addressed the same way: by a path name.
Error handling:
• Errors should be handled as close to the hardware as possible. If a controller generates an error, it should try to correct the error itself. If the controller cannot solve the error, the device driver should handle it, perhaps by reading the blocks again.
• Many errors can be resolved in a lower layer. If the lower layer cannot handle the error, the problem should be told to an upper layer.
• In many cases error recovery can be done at a lower layer without the upper layers even knowing about the error.
Buffering:
• Data arriving from a device often cannot be stored directly in its final destination. For example, data packets coming from the network cannot be stored directly in physical memory; the packets have to be put into a buffer for examination.
Direct Memory Access.
• CPU needs to address the device controllers to exchange data with them.
• CPU can request data from an I/O controller one byte at a time, which
is wastage of time.
• So a different scheme called DMA (Direct Memory Access) is used.
The operating system can only use DMA if the hardware has DMA
controller.
• A DMA controller is available for regulating transfers to multiple devices.
• The DMA controller has separate access to the system bus independent
to CPU as shown in figure 6-2. It contains several registers that can be
written and read by CPU.
• These registers include a memory address register, a byte count register, and one or more control registers.
Disadvantages of DMA:
Generally the CPU is much faster than the DMA controller and could do the job itself much faster, so if there is no other work for it to do, the CPU has to wait for the slower DMA controller.
10) What is Semaphore? Explain its properties along with drawbacks. Explain any
problem and solve it by Semaphore.[07]
Ans:
Semaphore:
A semaphore is a variable that provides an abstraction for controlling access of a
shared resource by multiple processes in a parallel programming environment.
Properties of Semaphores
Disadvantages of Semaphores
Some of the disadvantages of semaphores are as follows:
1. Semaphores are complicated so the wait and signal operations must be implemented in
the correct order to prevent deadlocks.
2. Semaphores are impractical for large scale use as their use leads to loss of modularity.
This happens because the wait and signal operations prevent the creation of a structured
layout for the system.
3. Semaphores may lead to a priority inversion where low priority processes may access
the critical section first and high priority processes later.
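As an example problem, the classic bounded-buffer (producer-consumer) problem can be solved with semaphores; the sketch below uses Python's threading.Semaphore, with an arbitrary buffer size and item count:

```python
import threading

BUF_SIZE = 3
buffer = []
empty = threading.Semaphore(BUF_SIZE)   # counts free slots
full = threading.Semaphore(0)           # counts filled slots
mutex = threading.Lock()                # protects the buffer itself

def producer():
    for item in range(5):
        empty.acquire()                 # wait: block if no free slot
        with mutex:
            buffer.append(item)
        full.release()                  # signal: one more filled slot

def consumer(out):
    for _ in range(5):
        full.acquire()                  # wait: block if buffer empty
        with mutex:
            out.append(buffer.pop(0))
        empty.release()                 # signal: one more free slot

consumed = []
p = threading.Thread(target=producer)
c = threading.Thread(target=consumer, args=(consumed,))
p.start(); c.start(); p.join(); c.join()
print(consumed)                         # items arrive in FIFO order: [0, 1, 2, 3, 4]
```

Note the order of the wait and signal operations: swapping empty.acquire() with the mutex acquisition, for instance, can produce exactly the deadlock the first drawback warns about.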
11) Which are the necessary conditions for Deadlock? Explain Deadlock recovery in brief.[07]
Ans:
Deadlock recovery
• Recovery through preemption
▪ In some cases it may be possible to temporarily take a resource away from
its current owner and give it to another process.
▪ The ability to take a resource away from a process, have another process
use it, and then give it back without the process noticing it is highly
dependent on the nature of the resource.
▪ Recovering this way is frequently difficult or impossible.
▪ Choosing the process to suspend depends largely on which ones have
resources that can easily be taken back.
• Recovery through rollback
▪ Processes can be checkpointed periodically. The checkpoint contains not only the memory image but also the resource state, that is, which resources are currently assigned to the process.
▪ When a deadlock is detected, it is easy to see which resources are needed.
▪ To do the recovery, a process that owns a needed resource is rolled back to a point in time before it acquired that resource, by restarting it from one of its earlier checkpoints.
▪ In effect, the process is reset to an earlier moment when it did not have the
resource, which is now assigned to one of the deadlocked processes.
▪ If the restarted process tries to acquire the resource again, it will have to
wait until it becomes available.
In Operating System (Memory Management Technique : Paging), for each process page table
will be created, which will contain Page Table Entry (PTE). This PTE will contain information
like frame number (The address of main memory where we want to refer), and some other
useful bits (e.g., valid/invalid bit, dirty bit, protection bit etc). This page table entry (PTE) will
tell where in the main memory the actual page is residing.
Now the question is where to place the page table, such that overall access time (or reference
time) will be less.
The problem initially was how to quickly access main memory contents based on the address generated by the CPU (i.e., the logical/virtual address). Initially, some people thought of using registers to store the page table, as they are high-speed memory, so access time would be less.
Steps in TLB hit:
1. CPU generates virtual address.
2. It is checked in TLB (present).
3. Corresponding frame number is retrieved, which now tells where in the main memory
page lies.
Steps in a TLB miss:
1. CPU generates virtual address.
2. It is checked in TLB (not present).
3. Now the page number is matched to page table residing in main memory (assuming page
table contains all PTE).
4. Corresponding frame number is retrieved, which now tells where in the main memory
page lies.
5. The TLB is updated with the new PTE (if space is not available, one of the replacement techniques comes into the picture, i.e., FIFO, LRU, MFU, etc.).
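The hit/miss steps can be modeled with two lookup tables; the page-to-frame mappings below are made up for illustration:

```python
# Hypothetical page table (complete) and a small TLB (partial cache of it).
page_table = {0: 5, 1: 9, 2: 3, 3: 7}   # page number -> frame number
tlb = {0: 5}                            # only page 0 is cached so far

def translate(page):
    if page in tlb:                     # TLB hit: frame found immediately
        return tlb[page], "hit"
    frame = page_table[page]            # TLB miss: walk the page table in memory
    tlb[page] = frame                   # update the TLB with the new PTE
    return frame, "miss"

print(translate(0))   # (5, 'hit')
print(translate(2))   # (3, 'miss')
print(translate(2))   # (3, 'hit') - cached after the first miss
```

A real TLB has a fixed number of entries, so the update step would evict an existing entry (FIFO, LRU, etc.) instead of growing the dictionary.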
Virtual Memory:
Virtual Memory is a storage allocation scheme in which secondary memory can be addressed
as though it were part of main memory. The addresses a program may use to reference memory
are distinguished from the addresses the memory system uses to identify physical storage sites,
and program generated addresses are translated automatically to the corresponding machine
addresses.
The size of virtual storage is limited by the addressing scheme of the computer system and by the amount of secondary memory available, not by the actual number of main storage locations.
It is a technique that is implemented using both hardware and software. It maps memory
addresses used by a program, called virtual addresses, into physical addresses in computer
memory.
1. All memory references within a process are logical addresses that are dynamically
translated into physical addresses at run time. This means that a process can be swapped
in and out of main memory such that it occupies different places in main memory at
different times during the course of execution.
2. A process may be broken into a number of pieces and these pieces need not be contiguously located in main memory during execution.
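Translating a virtual address into a physical one is simple arithmetic once the page size is fixed; the 4 KB page size and the page-to-frame mapping here are assumed:

```python
PAGE_SIZE = 4096                               # 4 KB pages (assumed)

virtual_address = 20503
page_number = virtual_address // PAGE_SIZE     # which piece of the process: 5
offset = virtual_address % PAGE_SIZE           # position inside that page: 23

# With a hypothetical page table mapping page 5 -> frame 2, the physical
# address is rebuilt from the frame base plus the unchanged offset.
frame = {5: 2}[page_number]
physical_address = frame * PAGE_SIZE + offset
print(page_number, offset, physical_address)   # 5 23 8215
```

Because the translation happens on every reference at run time, the same page can land in a different frame after being swapped out and back in, without the program noticing.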
13) Explain thread implementation in user space with its advantages and
disadvantages[07]
Ans:
User level threads are supported above the kernel in user space and are managed without kernel
support.
Advantages
• Thread creation, switching, and synchronization are done with ordinary procedure calls, without kernel intervention, so they are fast.
• Each process can have its own customized scheduling algorithm.
• User-level threads can run on any operating system, even one without thread support.
Disadvantages
• If one user-level thread makes a blocking system call, the whole process is blocked.
• The threads of one process cannot run in parallel on multiple CPUs, because the kernel schedules the process as a single unit.
• Many-to-one
• One-to-one
• Many-to-many
• Two-level
All models map user-level threads to kernel-level threads. A kernel thread is similar to a process in a non-threaded (single-threaded) system. The kernel thread is the unit of execution that is scheduled by the kernel to execute on the CPU. The term virtual processor is often used instead of kernel thread.
Many-to-one
In the many-to-one model all user level threads execute on the same kernel thread. The process
can only run one user-level thread at a time because there is only one kernel-level thread
associated with the process.
The kernel has no knowledge of user-level threads. From its perspective, a process is an opaque
black box that occasionally makes system calls.
One-to-one
In the one-to-one model every user-level thread executes on a separate kernel-level thread.
In this model the kernel must provide a system call for creating a new kernel thread.
Many-to-many
In the many-to-many model several user-level threads are multiplexed over a smaller or equal number of kernel-level threads.
Two-level
The two-level model is similar to the many-to-many model but also allows for certain user-
level threads to be bound to a single kernel-level thread.
14) List the different file implementation methods and explain them in detail.[07]
Ans:
• Contiguous Allocation
• Linked List Allocation
• Linked List Allocation Using A Table In Memory
• I-nodes
Contiguous Allocation
▪ The simplest allocation scheme is to store each file as a contiguous run of disk blocks.
▪ We see an example of contiguous storage allocation in fig. 7-5.
▪ Here the first 40 disk blocks are shown, starting with block 0 on the left. Initially, the disk was empty.
Advantages
▪ First it is simple to implement because keeping track of where a file’s blocks are is reduced to
remembering two numbers: The disk address of the first block and the number of blocks in the
file.
▪ Second, the read performance is excellent because the entire file can be read from the disk in a
single operation. Only one seek is needed (to the first block), so data comes in at the full
bandwidth of the disk.
▪ Thus contiguous allocation is simple to implement and has high performance
Drawbacks
▪ Over time the disk becomes fragmented, since the holes left by deleted files can only be reused by files that fit into them.
▪ There is one situation in which contiguous allocation is feasible and, in fact, widely used: on CD-ROMs. Here all the file sizes are known in advance and will never change during use of the CD-ROM file system.
Linked List Allocation
▪ Another method for storing files is to keep each one as a linked list of the disk blocks
▪ The first word of each block is used as a pointer to the next one. The rest of the block is for
data.
▪ Unlike contiguous allocation, every disk block can be used in this method. No space is lost to
disk fragmentation.
▪ It is sufficient for a directory entry to store only disk address of the first block, rest can be found
starting there.
Drawbacks
▪ Random access is extremely slow: to get to block n of a file, the operating system has to start at the beginning and read the n - 1 blocks before it.
▪ The amount of data storage in a block is no longer a power of two, because the pointer takes up a few bytes.

Batch OS
Some computer processes are very lengthy and time-consuming. To speed up processing, jobs with similar types of needs are batched together and run as a group.
The user of a batch operating system never directly interacts with the computer. In this type
of OS, every user prepares his or her job on an offline device like a punch card and submit it
to the computer operator.
Real time OS
In a real-time operating system, the time interval required to process and respond to inputs is very small.
Examples: Military Software Systems, Space Software Systems.
Distributed OS
Distributed systems use many processors located in different machines to provide very fast computation to their users.
Network OS
A Network Operating System runs on a server. It provides the capability to manage data, users, groups, security, applications, and other networking functions.
Mobile OS
Mobile operating systems are those that are especially designed to power smartphones, tablets, and wearable devices.
(i) Authentication
Authentication refers to identifying each user of the system and associating the executing programs with those users. It is the responsibility of the operating system to create a protection system which ensures that a user who is running a particular program is authentic. Operating systems generally identify/authenticate users in the following three ways:
• Username / Password - Users need to enter a registered username and password with the operating system to log in to the system.
• User card/key - Users need to punch a card into a card slot, or enter a key generated by a key generator, in an option provided by the operating system to log in to the system.
• User attribute (fingerprint / eye retina pattern / signature) - Users need to pass their attribute via a designated input device used by the operating system to log in to the system.
(ii) Mutual Exclusion
A mutual exclusion (mutex) is a program object that prevents simultaneous access to a shared
resource. This concept is used in concurrent programming with a critical section, a piece of
code in which processes or threads access a shared resource. Only one thread owns the mutex
at a time, thus a mutex with a unique name is created when a program starts. When a thread
holds a resource, it has to lock the mutex from other threads to prevent concurrent access of
the resource. Upon releasing the resource, the thread unlocks the mutex. A mutex thus ensures that no two threads access the shared resource at the same time. It acts as a lock and is the most basic synchronization tool. When a thread tries to acquire a mutex, it gains the mutex if it is available; otherwise the thread is put to sleep. Mutual exclusion reduces latency and busy-waiting by using queuing and context switches. Mutexes can be enforced at both the hardware and software levels.
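The lock/unlock discipline can be sketched with Python's threading.Lock, which is a mutex; the shared counter is only for illustration:

```python
import threading

counter = 0
mutex = threading.Lock()     # only one thread may own it at a time

def increment(times):
    global counter
    for _ in range(times):
        with mutex:          # lock before touching the shared resource
            counter += 1     # critical section; lock released on block exit

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)               # 40000: no updates were lost
```

Without the mutex, two threads could read the same old value of counter and each write back old + 1, losing one of the updates.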
(iii) Monitor
The monitor is one of the ways to achieve Process synchronization. The monitor is supported
by programming languages to achieve mutual exclusion between processes. For example Java
Synchronized methods. Java provides wait() and notify() constructs.
1. It is the collection of condition variables and procedures combined together in a special kind
of module or a package.
2. The processes running outside the monitor can’t access the internal variable of the monitor but
can call procedures of the monitor.
3. Only one process at a time can execute code inside monitors.
(iv) Segmentation
Segmentation is a memory management technique in which memory is divided into variable-size segments, each corresponding to a logical unit of the program. The details about each segment are stored in a table called the segment table. The segment table is stored in one (or many) of the segments.
18) Explain Context Switching. Discuss performance evaluation of FCFS (First Come
First Serve) & RR (Round Robin) scheduling[07]
Ans:
Switching the CPU to another process requires saving the state of the old process and
loading the saved state for the new process.
This task is known as a context switch.
The context of a process is represented in the PCB of the process; it includes the values of the CPU registers, the process state, and memory-management information.
Round Robin (RR)
• Advantages:
➢ One of the oldest, simplest, fairest and most widely used algorithms.
• Disadvantages:
➢ Context switch overhead is there.
➢ Determination of time quantum is too critical. If it is too short, it causes frequent context
switches and lowers CPU efficiency. If it is too long, it causes poor response for short
interactive process.
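The performance of FCFS and RR can be evaluated on a hypothetical workload by computing per-process waiting times under both policies (the burst times below are the classic textbook values, all jobs arriving at time 0):

```python
def fcfs_waits(bursts):
    """Waiting time of each job when run in arrival order (all arrive at 0)."""
    waits, clock = [], 0
    for b in bursts:
        waits.append(clock)
        clock += b
    return waits

def rr_waits(bursts, quantum):
    """Round Robin with a fixed time quantum; all jobs arrive at time 0."""
    n = len(bursts)
    remaining = list(bursts)
    finish = [0] * n
    clock = 0
    queue = list(range(n))
    while queue:
        i = queue.pop(0)
        run = min(quantum, remaining[i])   # run for one quantum or until done
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)                # preempted: back of the ready queue
        else:
            finish[i] = clock
    # waiting time = turnaround - burst; arrival is 0 for every job
    return [finish[i] - bursts[i] for i in range(n)]

bursts = [24, 3, 3]
print(fcfs_waits(bursts))             # [0, 24, 27] -> average 17
print(rr_waits(bursts, quantum=4))    # [6, 4, 7]   -> average about 5.67
```

RR sharply cuts the waiting time of the short jobs at the cost of extra context switches for the long one, which is exactly the trade-off the time quantum controls.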
19) Explain the following allocation algorithms: 1) First-fit 2) Best-fit 3) Worst-fit[07]
Ans:
First Fit
The first fit approach is to allocate the first free partition or hole large enough to accommodate the process. The search finishes after finding the first suitable free partition.
Search Starts from the starting location of the memory.
First available hole that is large enough to hold the process is selected for allocation.
The hole is then broken up into two pieces, one for process and another for unused
memory.
Search time is smaller here.
Memory loss is higher, as a very large hole may be selected for a small process.
Here a process of size 426k will not get any partition for allocation.
Advantage
It is the fastest algorithm, because it searches only as far as the first hole that is big enough.
Disadvantage
The remaining unused memory areas left after allocation become wasted if they are too small. Thus, a request for a larger memory block cannot be accomplished.
Best Fit
The best fit deals with allocating the smallest free partition which meets the requirement of the
requesting process. This algorithm first searches the entire list of free partitions and considers
the smallest hole that is adequate. It then tries to find a hole which is close to actual process
size needed.
Entire memory is searched here.
The smallest hole, which is large enough to hold the process, is selected for
allocation.
Advantage
Memory utilization is much better than first fit, as it searches for the smallest free partition that is available.
Disadvantage
It is slower and may even tend to fill up memory with tiny useless holes.
Worst fit
The worst fit approach is to locate the largest available free portion so that the portion left over will be big enough to be useful. It is the reverse of best fit.
Entire memory is searched here also. The largest hole, which is large enough to hold the process, is selected for allocation.
This algorithm can be used only with dynamic partitioning.
Here process of size 426k will not get any partition for allocation.
Advantage
The leftover portion after allocation is large, so it is more likely to be big enough to hold another process.
Disadvantage
If a process requiring larger memory arrives at a later stage then it cannot be accommodated
as the largest hole is already split and occupied.
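For comparison, all three strategies can be sketched over one hypothetical list of free holes:

```python
def first_fit(holes, size):
    """Index of the first hole large enough, or None."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Index of the smallest hole that still fits, or None."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    """Index of the largest hole that fits, or None."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]   # free partition sizes in KB (hypothetical)
print(first_fit(holes, 212))        # 1 -> the 500K hole
print(best_fit(holes, 212))         # 3 -> the 300K hole
print(worst_fit(holes, 212))        # 4 -> the 600K hole
```

First fit stops at the 500K hole, best fit scans everything and picks the tight 300K fit, and worst fit deliberately picks the 600K hole so the leftover stays usable.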
20) What is deadlock? List the conditions that lead to deadlock. How can deadlock be prevented? [07]
Ans:
A deadlock is a situation in which a set of processes are blocked because each process holds a resource and waits for another resource held by some other process.
Here,
• Process P1 holds resource R1 and waits for resource R2 which is held by process P2.
• Process P2 holds resource R2 and waits for resource R1 which is held by process P1.
• None of the two processes can complete and release their resource.
• Thus, both the processes keep waiting infinitely.
1. Mutual Exclusion-
By this condition,
• There must exist at least one resource in the system which can be used by only one process at
a time.
• If there exists no such resource, then deadlock will never occur.
• Printer is an example of a resource that can be used by only one process at a time.
Subject name: Operating System and Virtulization Subject code:3141601
2. Hold and Wait-
By this condition,
• There must exist a process which holds some resource and waits for another resource held by
some other process.
3. No Preemption-
By this condition,
• Once the resource has been allocated to the process, it can not be preempted.
• It means resource can not be snatched forcefully from one process and given to the other
process.
• The process must release the resource voluntarily by itself.
4. Circular Wait-
By this condition,
• All the processes must wait for the resource in a cyclic manner where the last process waits for
the resource held by the first process.
Here,
• Process P1 waits for a resource held by process P2.
• Process P2 waits for a resource held by process P3.
• Process P3 waits for a resource held by process P4.
• Process P4 waits for a resource held by process P1.
Deadlock Prevention
• Deadlock can be prevented by attacking the one of the four conditions that leads
to deadlock.
1) Attacking the Mutual Exclusion Condition
o No deadlock if no resource is ever assigned exclusively to a single process.
o Some devices can be spooled such as printer, by spooling printer output; several
processes can generate output at the same time.
o Only the printer daemon process uses physical printer.
o Thus deadlock for printer can be eliminated.
o Not all devices can be spooled.
▪ Principle: Avoid assigning a resource when that is not absolutely necessary.
▪ Try to make sure that as few processes as possible actually claim the resource.
2) Attacking the Hold and Wait Condition
o Require processes to request all their resources before starting execution.
o A process is allowed to run if all resources it needed is available. Otherwise
nothing will be allocated and it will just wait.
o Problem with this strategy is that a process may not know required resources at
start of run.
o Resource will not be used optimally.
o It also ties up resources other processes could be using.
o Variation: A process must give up all resources before making a new request.
Process is then granted all prior resources as well as the new ones only if all
required resources are available.
o Problem: what if someone grabs the resources in the meantime how can the
processes save its state?
3) Attacking the No Preemption Condition
o This is not a possible option.
o When a process P0 request some resource R which is held by another process P1
then resource R is forcibly taken away from the process P1 and allocated to P0.
o Consider a process holds the printer, halfway through its job; taking the printer
away from this process without having any ill effect is not possible.
4) Attacking the Circular Wait Condition
o To provide a global numbering of all the resources.
o Now the rule is this: processes can request resources whenever they want to, but
all requests must be made in numerical order.
o A process need not acquire them all at once.
o Circular wait is prevented if a process holding resource n cannot wait for resource
m, if m > n.
o No way to complete a cycle.
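The numerical-ordering rule can be sketched with locks standing in for resources. The helper names below are hypothetical; the point is only that every process requests its resources in increasing numerical order, so a cycle of waits can never form:

```python
import threading

# Each resource is given a global number (condition 4's "global numbering").
resources = {1: threading.Lock(), 2: threading.Lock(), 3: threading.Lock()}

def acquire_in_order(*numbers):
    """Acquire the requested resources strictly in increasing order."""
    ordered = sorted(numbers)          # always request lower numbers first
    for n in ordered:
        resources[n].acquire()
    return ordered

def release_all(numbers):
    for n in reversed(numbers):        # release in the opposite order
        resources[n].release()

held = acquire_in_order(3, 1)          # internally acquires 1, then 3
print(held)                            # [1, 3]
release_all(held)
```

Because no thread ever waits for a resource numbered lower than one it already holds, the circular-wait condition cannot arise.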
• The process structure of MINIX 3 is shown in figure 1-5, with kernel call handler
labeled as Sys.
• The device driver for the clock is also in the kernel because the scheduler
interacts closely with it. All the other device drivers run as separate user
processes.
• Outside the kernel, the system is structured as three layers of processes all
running in user mode.
• The lowest layer contains the device drivers. Since they run in user mode, they do not have access to the I/O port space and cannot issue I/O commands directly.
• Above the driver layer is another user-mode layer containing the servers, which do most of the work of the operating system.
• One interesting server is the reincarnation server, whose job is to check if the other
servers and drivers are functioning correctly. In the event that a faulty one is detected, it
is automatically replaced without any user intervention.
• All the user programs lie on the top layer.
22) What is thread? Explain thread Structure? And explain any one type of thread in
details.[07]
Ans:
Thread:
• A program has one or more loci of execution; each such locus is called a thread of execution.
• In traditional operating systems, each process has an address space and a single
thread of execution.
• It is the smallest unit of processing that can be scheduled by an operating system.
• A thread is a single sequence stream within a process. Because threads have some of the properties of processes, they are sometimes called lightweight processes. Within a process, threads allow multiple streams of execution.
Thread Structure
• Process is used to group resources together and threads are the entities
scheduled for execution on the CPU.
• The thread has a program counter that keeps track of which instruction to
execute next.
• It has registers, which hold its current working variables.
• It has a stack, which contains the execution history, with one frame for each procedure called but not yet returned from.
• Although a thread must execute in some process, the thread and its process are different concepts and can be treated separately.
• What threads add to the process model is to allow multiple executions to take
place in the same process environment, to a large degree independent of one
another.
• Having multiple threads running in parallel in one process is similar to
having multiple processes running in parallel in one computer.
• In the former case, the threads share an address space, open files, and other resources.
• In the latter case, the processes share physical memory, disks, printers, and other resources.
• In Fig(a) we see three traditional processes. Each process has its own address space
and a single thread of control.
• In contrast, in Fig(b) we see a single process with three threads of control.
• Although in both cases we have three threads, in Fig(a) each of them operates in a
different address space, whereas in Fig. (b) all three of them share the same address
space.
• Like a traditional process (i.e., a process with only one thread), a thread can be in
any one of several states: running, blocked, ready, or terminated.
• When multithreading is present, processes normally start with a single thread
present. This thread has the ability to create new threads by calling a library
procedure thread_create.
• When a thread has finished its work, it can exit by calling a library procedure
thread_exit.
• One thread can wait for a (specific) thread to exit by calling a procedure
thread_join. This procedure blocks the calling thread until a (specific) thread has
exited.
• Another common thread call is thread_yield, which allows a thread to voluntarily
give up the CPU to let another thread run.
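The library calls described above have close analogues in Python's threading module, which can serve as a runnable sketch: constructing and starting a Thread corresponds to thread_create, returning from the target function corresponds to thread_exit, and join() corresponds to thread_join.

```python
import threading

results = []

def worker(name):
    # this function runs in its own thread of execution
    results.append(f"{name} done")
    # returning here ends the thread (the thread_exit analogue)

t = threading.Thread(target=worker, args=("t1",))  # thread_create analogue
t.start()
t.join()        # thread_join: block the caller until t1 has exited
print(results)  # ['t1 done']
```

(Python has no direct thread_yield call in this API; time.sleep(0) is the usual way to voluntarily give up the CPU.)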
Types of thread
1. User Level Threads
2. Kernel Level Threads
User Level Threads
• User level threads are implemented in user level libraries, rather than via systems
calls.
• So thread switching does not need to call the operating system or cause an interrupt to the kernel.
• The kernel knows nothing about user level threads and manages them as if they
were single threaded processes.
• When threads are managed in user space, each process needs its own private
thread table to keep track of the threads in that process.
• This table keeps track only of the per-thread properties, such as each thread’s program counter, stack pointer, registers, state, and so forth.
• The thread table is managed by the run-time system.
Advantages
▪ It can be implemented on an Operating System that does not support threads.
▪ A user level thread does not require modification to operating systems.
▪ Simple Representation: Each thread is represented simply by a PC, registers, stack
and a small control block, all stored in the user process address space.
▪ Simple Management: This simply means that creating a thread, switching between
threads and synchronization between threads can all be done without intervention
of the kernel.
▪ Fast and Efficient: Thread switching is not much more expensive than a procedure call.
▪ User-level threads also have other advantages: they allow each process to have its own customized scheduling algorithm.
Round Robin:
Selection Criteria:
Each selected process is assigned a time interval, called time quantum or time slice. Process
is allowed to run only for this time interval. Here, two things are possible: First, Process is
either blocked or terminated before the quantum has elapsed. In this case the CPU switching
is done and another process is scheduled to run. Second, Process needs CPU burst longer
than time quantum. In this case, process is running at the end of the time quantum. Now, it
will be preempted and moved to the end of the queue. CPU will be allocated to another
process. Here, length of time quantum is critical to determine.
Decision Mode:
Preemptive.
Implementation :
This strategy can be implemented by using a circular FIFO queue. Whenever a process arrives, releases the CPU, or is preempted, it is moved to the end of the queue. When the CPU becomes free, the process at the front of the queue is selected to run.
Example :
Consider the following set of four processes. Their arrival time and time required to complete
the execution are given in the following table. All time values are in milliseconds. Consider
that time quantum is of 4 ms, and context switch overhead is of 1 ms.
Process Arrival Time (ms) Time Required (ms)
P0 0 10
P1 1 6
P2 3 2
P3 5 4
• Gantt Chart :
P0 | P1 | P2 | P0 | P3 | P1 | P0
0 4 5 9 10 12 13 17 18 22 23 25 26 28
At 4 ms, process P0 completes its time quantum, so it is preempted and another process, P1, is allowed to run. At 12 ms, process P2 finishes its execution and voluntarily releases the CPU, and another process is selected to run. 1 ms is wasted on each context switch as overhead. This procedure is repeated until all processes complete their execution.
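The worked example can be checked with a small simulator. The function below is an illustrative sketch, not a standard API; in particular, it assumes that a process arriving at the same instant another is preempted joins the queue ahead of the preempted one, which is one reasonable tie-breaking choice among several.

```python
from collections import deque

def round_robin(procs, quantum, switch_cost):
    """Simulate Round Robin; `procs` is a list of (name, arrival, burst).

    Returns a dict mapping each process name to its completion time.
    """
    procs = sorted(procs, key=lambda p: p[1])
    remaining = {name: burst for name, _, burst in procs}
    arrivals = deque(procs)
    ready, done, t = deque(), {}, 0

    def admit(now):                       # move arrived processes to ready queue
        while arrivals and arrivals[0][1] <= now:
            ready.append(arrivals.popleft()[0])

    admit(0)
    while ready or arrivals:
        if not ready:                     # CPU idle until the next arrival
            t = arrivals[0][1]
            admit(t)
            continue
        name = ready.popleft()
        run = min(quantum, remaining[name])
        t += run
        remaining[name] -= run
        admit(t)                          # arrivals during (or at end of) the slice
        if remaining[name] == 0:
            done[name] = t                # process finished
        else:
            ready.append(name)            # preempted: back of the queue
        if ready or arrivals:
            t += switch_cost              # 1 ms lost per context switch
    return done

procs = [("P0", 0, 10), ("P1", 1, 6), ("P2", 3, 2), ("P3", 5, 4)]
print(round_robin(procs, quantum=4, switch_cost=1))
# {'P2': 12, 'P3': 22, 'P1': 25, 'P0': 28}
```

The completion times reproduce the Gantt chart above: P2 at 12 ms, P3 at 22 ms, P1 at 25 ms and P0 at 28 ms.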
• Statistics:
Process Completion Time Turnaround Time Waiting Time
P0 28 28 18
P1 25 24 18
P2 12 9 7
P3 22 17 13
Average turnaround time = (28 + 24 + 9 + 17) / 4 = 19.5 ms
Average waiting time = (18 + 18 + 7 + 13) / 4 = 14 ms
Context Switching
Context Switching involves storing the context or state of a process so that it can be reloaded
when required and execution can be resumed from the same point as earlier. This is a feature
of a multitasking operating system and allows a single CPU to be shared by multiple processes.
A diagram that demonstrates context switching is as follows:
In the above diagram, initially Process 1 is running. Process 1 is switched out and Process 2 is
switched in because of an interrupt or a system call. Context switching involves saving the state
of Process 1 into PCB1 and loading the state of process 2 from PCB2. After some time again
a context switch occurs and Process 2 is switched out and Process 1 is switched in again. This
involves saving the state of Process 2 into PCB2 and loading the state of process 1 from PCB1.
Context Switching Triggers
There are three major triggers for context switching. These are given as follows:
• Multitasking: In a multitasking environment, a process is switched out of the CPU so
another process can be run. The state of the old process is saved and the state of the
new process is loaded. On a pre-emptive system, processes may be switched out by the
scheduler.
• Interrupt Handling: The hardware switches a part of the context when an interrupt
occurs. This happens automatically. Only some of the context is changed to minimize
the time required to handle the interrupt.
• User and Kernel Mode Switching: A context switch may take place when a transition
between the user mode and kernel mode is required in the operating system.
• Switching the CPU to another process requires saving the state of the old process and loading the saved state for the new process.
• The context of a process is represented in the PCB of a process; it includes
the value of the CPU registers, the process state and memory-management
information.
• When a context switch occurs, the kernel saves the context of the old
process in its PCB and loads the saved context of the new process scheduled
to run.
• Context-switch time is pure overhead, because the system does no useful
work while switching.
• Its speed varies from machine to machine, depending on the memory speed, the number of registers that must be copied, and the existence of special instructions.
SCAN:
• From the current position disk arm starts in up direction and moves
towards the end, serving all the pending requests until end.
• At that end arm direction is reversed (down) and moves towards the other
end serving the pending requests on the way.
• As per SCAN request will be satisfied in order: 11, 12, 16, 34, 36, 50, 9, 1
• Total cylinder movement: (12-11) + (16-12) + (34-16) + (36-34) + (50-36) + (50-9) + (9-1) = 88
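The SCAN order and total above can be reproduced with a short sketch. The setup is assumed from the worked numbers: the head starts at cylinder 11 moving upward, and (as in the total above) the arm reverses after the highest pending request, 50.

```python
def scan_movement(requests, head, direction_up=True):
    """Return (service order, total arm movement) for SCAN scheduling.

    Simplifying assumption: the arm reverses at the last pending request
    rather than the physical end of the disk.
    """
    up = sorted(r for r in requests if r >= head)                 # on the way up
    down = sorted((r for r in requests if r < head), reverse=True)  # on the way back
    order = up + down if direction_up else down + up
    movement, pos = 0, head
    for r in order:
        movement += abs(r - pos)   # cylinders crossed to reach this request
        pos = r
    return order, movement

order, total = scan_movement([12, 16, 34, 36, 50, 9, 1], head=11)
print(order)   # [12, 16, 34, 36, 50, 9, 1]
print(total)   # 88
```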
25) Write a Shell Script to find largest among the 3 given number. What is RAID?
Explain in brief.[07]
Ans:
echo "Enter Num1"
read num1
echo "Enter Num2"
read num2
echo "Enter Num3"
read num3
if [ $num1 -ge $num2 ] && [ $num1 -ge $num3 ]; then echo "$num1 is the largest"
elif [ $num2 -ge $num1 ] && [ $num2 -ge $num3 ]; then echo "$num2 is the largest"
else echo "$num3 is the largest"
fi
RAID (Redundant Array of Independent Disks) is a technique that stores data across multiple physical disks to improve performance, provide fault tolerance, or both. The standard RAID levels are:
RAID 0: This configuration has striping, but no redundancy of data. It offers the best
performance, but no fault tolerance.
RAID 1: Also known as disk mirroring, this configuration consists of at least two drives that
duplicate the storage of data. There is no striping. Read performance is improved since either
disk can be read at the same time. Write performance is the same as for single disk storage.
RAID 2: This configuration uses striping across disks, with some disks storing error checking
and correcting (ECC) information. It has no advantage over RAID 3 and is no longer used.
RAID 3: This technique uses striping and dedicates one drive to storing parity information.
The embedded ECC information is used to detect errors. Data recovery is accomplished by
calculating the exclusive OR (XOR) of the information recorded on the other drives. Since an
I/O operation addresses all the drives at the same time, RAID 3 cannot overlap I/O. For this
reason, RAID 3 is best for single-user systems with long record applications.
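The XOR recovery described for RAID 3 (and used by RAID 4/5 as well) can be shown in a few lines: the parity block is the XOR of the data blocks, so any single lost block can be rebuilt by XOR-ing everything that survives. The two-byte blocks below are made-up sample data.

```python
def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

d0, d1, d2 = b"\x0f\x0f", b"\xf0\x01", b"\x33\x44"   # data blocks
parity = xor_blocks([d0, d1, d2])      # written to the parity drive

# drive holding d1 fails: rebuild it from the survivors plus parity
rebuilt = xor_blocks([d0, d2, parity])
print(rebuilt == d1)                   # True
```

XOR is associative and commutative, so d0 XOR d2 XOR (d0 XOR d1 XOR d2) collapses to d1 exactly.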
RAID 4: This level uses large stripes, which means you can read records from any single drive.
This allows you to use overlapped I/O for read operations. Since all write operations have to
update the parity drive, no I/O overlapping is possible. RAID 4 offers no advantage over RAID
5.
RAID 5: This level is based on block-level striping with parity. The parity information is
striped across each drive, allowing the array to function even if one drive were to fail. The
array's architecture allows read and write operations to span multiple drives. This results in
performance that is usually better than that of a single drive, but not as high as that of a RAID
0 array. RAID 5 requires at least three disks, but it is often recommended to use at least five
disks for performance reasons.
RAID 5 arrays are generally considered to be a poor choice for use on write-intensive systems
because of the performance impact associated with writing parity information. When a disk
does fail, it can take a long time to rebuild a RAID 5 array. Performance is usually degraded
during the rebuild time, and the array is vulnerable to an additional disk failure until the rebuild
is complete.
RAID 6: This technique is similar to RAID 5, but includes a second parity scheme that is
distributed across the drives in the array. The use of additional parity allows the array to
continue to function even if two disks fail simultaneously. However, this extra protection
comes at a cost. RAID 6 arrays have a higher cost per gigabyte (GB) and often have slower
write performance than RAID 5 arrays.
1) ls:- It is used to list the contents of a directory.
Syntax:- ls [options] [names]
Description :-
-a Shows you all files, even files that are hidden (these files begin with a dot.)
-A List all files including the hidden files. However, does not display the working
directory (.) or the parent directory (..).
-d If an argument is a directory it only lists its name not its contents
-l Shows you huge amounts of information (permissions, owners, size, and when last
modified.)
-p Displays a slash ( / ) in front of all directories
-r Reverses the order of how the files are displayed
-R Includes the contents of subdirectories
2) cat:- It is used to display the contents of a file.
Syntax:- cat [options] [filename]
Description :-
-A Show all.
-b Omits line numbers for blank space in the output.
-e A $ character will be printed at the end of each line prior to a new line.
-E Displays a $ (dollar sign) at the end of each line.
-n Line numbers for all the output lines.
-s If the output has multiple empty lines it replaces it with one empty line.
-T Displays the tab characters in the output.
-v Non-printing characters (with the exception of tabs, new-lines and form-feeds) are printed visibly.
3)ps:- It is used to report the process status. ps is the short name for Process Status.
Syntax:- ps [options]
Description :-
-a List information about all processes most frequently requested: all those except process
group leaders and processes not associated with a terminal
-A List information for all processes. Identical to -e, below
-f Generate a full listing
-j Print session ID and process group ID
-l Generate a long listing
6)suid:set user id
➢ suid (Set owner User ID up on execution) is a special type of file permissions
given to a file.
➢ Normally in Linux/Unix when a program runs, it inherits access permissions
from the logged in user.
➢ suid is defined as giving temporary permission to a user to run a program/file with the permissions of the file owner rather than the user who runs it.
➢ In simple words users will get file owner’s permissions as well as owner UID and
GID when executing a file/program/command.
7) finger:- finger command displays the user's login name, real name, terminal
name and write status (as a ''*'' after the terminal name if write permission is
denied), idle time, login time, office location and office phone number.
Syntax:- finger [username]
Description :-
-l Force long output format
-s Force short output format
Paging
• The program generated address is called as Virtual Addresses and form
the Virtual Address Space.
• Most virtual memory systems use a technique called paging.
• Virtual address space is divided into fixed-size partitions called pages.
• The corresponding units in the physical memory are called as page frames.
• The pages and page frames are always of the same size.
• Size of Virtual Address Space is greater than that of Main memory, so
instead of loading entire address space in to memory to run the process,
MMU copies only required pages into main memory.
• In order to keep the track of pages and page frames, OS maintains a data
structure called page table.
MMU(Memory Management Unit)-
The run time mapping between Virtual address and Physical Address is done by hardware
device known as MMU.
In memory management, Operating System will handle the processes and moves the
processes between disk and memory for execution . It keeps the track of available and used
memory.
Virtual and physical addresses are the same in compile-time and load-time address-binding
schemes. Virtual and physical addresses differ in execution-time address-binding scheme.
The set of all logical addresses generated by a program is referred to as a logical address
space. The set of all physical addresses corresponding to these logical addresses is referred to
as a physical address space.
The runtime mapping from virtual to physical address is done by the memory management
unit (MMU) which is a hardware device. MMU uses following mechanism to convert virtual
address to physical address.
• The value in the base register is added to every address generated by a user process,
which is treated as offset at the time it is sent to memory. For example, if the base
register value is 10000, then an attempt by the user to use address location 100 will be
dynamically reallocated to location 10100.
• The user program deals with virtual addresses; it never sees the real physical addresses.
• The Physical Address Space is conceptually divided into a number of fixed-size blocks,
called frames.
• The Logical Address Space is also split into fixed-size blocks, called pages.
• Page Size = Frame Size
Let us consider an example:
• Physical Address = 12 bits, then Physical Address Space = 4 K words
• Logical Address = 13 bits, then Logical Address Space = 8 K words
• Page size = frame size = 1 K words (assumption)
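Translation through the page table can be sketched directly from this example: with a 1 K-word page, the low 10 bits of a virtual address are the offset and the remaining bits select the page. The page-to-frame mapping below is a made-up illustration, not taken from the example.

```python
PAGE_SIZE = 1024                         # 1 K words, as in the example above
page_table = {0: 2, 1: 0, 2: 3, 3: 1}    # hypothetical page -> frame mapping

def translate(logical):
    """Map a virtual address to a physical address via the page table."""
    page, offset = divmod(logical, PAGE_SIZE)
    if page not in page_table:           # page not in memory: page fault
        raise LookupError(f"page fault on page {page}")
    return page_table[page] * PAGE_SIZE + offset

# virtual address 2049 = page 2, offset 1 -> frame 3, offset 1
print(translate(2049))   # 3073
```

Because the logical space (8 pages here) is larger than the physical space (4 frames), some pages are absent from the table; referencing one raises the page-fault case.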
28) What do you mean by mutual exclusion? Explain Peterson’s solution for mutual
exclusion problem[07]
Ans:
Mutual Exclusion:
Mutual exclusion implies that only one process can be inside the critical section at any time. If
any other processes require the critical section, they must wait until it is free.
It is a way of making sure that if one process is using a shared variable or file, the other processes will be excluded (stopped) from doing the same thing. Any correct solution must satisfy four conditions:
• No two processes may be inside their critical sections at the same moment.
• No assumptions are made about relative speeds of processes or the number of CPUs.
• No process running outside its critical section may block other processes.
• No process should have to wait arbitrarily long to enter its critical section.
Peterson's Algorithm
CONCEPT:
• Both the turn variable and the status flags are used, as in Dekker's algorithm. After
setting our flag we immediately give away the turn.
• We then wait for the turn if and only if the other flag is set. By waiting on the and of
two conditions, we avoid the need to clear and reset the flags.
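A runnable sketch of Peterson's algorithm for two threads is shown below. It relies on CPython's effectively sequentially consistent execution of bytecode; on truly parallel hardware, real memory barriers would be required, so treat this as an illustration of the logic rather than production synchronization.

```python
import threading
import time

flag = [False, False]   # flag[i]: thread i wants to enter its critical section
turn = 0                # whose turn it is to wait
counter = 0             # shared variable protected by the algorithm
N = 5000                # iterations per thread

def worker(i):
    global turn, counter
    other = 1 - i
    for _ in range(N):
        flag[i] = True          # announce intent to enter
        turn = other            # immediately give away the turn
        while flag[other] and turn == other:
            time.sleep(0)       # busy wait, yielding so the other thread runs
        counter += 1            # critical section
        flag[i] = False         # exit section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 10000: no increments were lost
```

If the entry protocol were removed, the two unsynchronized `counter += 1` updates could interleave and lose increments; with Peterson's algorithm the final count is exactly 2 × N.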
The use of virtual machines also comes with several important management considerations,
many of which can be addressed through general systems administration best practices and
tools that are designed to manage VMs. There are some risks to consolidation, including
overtaxing resources or potentially experiencing outages on multiple VMs due to one physical
hardware outage. While these cost savings increase as more virtual machines share the same
hardware platform, it does add risk. It is possible to place hundreds of virtual machines on the
same hardware, but if the hardware platform fails, it could take out dozens or hundreds of
virtual machines.
VM Uses
VMs have multiple uses, but in general they are deployed when the need for different operating
systems and processing power are needed for different applications running simultaneously.
For example, if an enterprise wants to test multiple web servers and small databases at the same
time. Similarly, if an enterprise wants to use the same server to run graphics-intensive gaming
software and customer service database.
ESXi is a type-1 hypervisor, meaning it runs directly on system hardware without the need for
an operating system (OS). Type-1 hypervisors are also referred to as bare-metal hypervisors
because they run directly on hardware.
ESXi's VMkernel interfaces directly with VMware agents and approved third-party modules.
Admins can configure VMware ESXi using its console or a vSphere client. They can also check
VMware's hardware compatibility list for approved, supported hardware on which to install
ESXi.
Microsoft Hyper-V
Hyper-V is Microsoft's hardware virtualization product. It lets you create and run a software
version of a computer, called a virtual machine. Each virtual machine acts like a complete
computer, running an operating system and programs. When you need computing resources,
virtual machines give you more flexibility, help save time and money, and are a more efficient
way to use hardware than just running one operating system on physical hardware.
Hyper-V runs each virtual machine in its own isolated space, which means you can run more
than one virtual machine on the same hardware at the same time. You might want to do this to
avoid problems such as a crash affecting the other workloads, or to give different people,
groups or services access to different systems.
Features of Hyper-V
Computing environment - A Hyper-V virtual machine includes the same basic parts as a
physical computer, such as memory, processor, storage, and networking. All these parts have
features and options that you can configure different ways to meet different needs. Storage and
networking can each be considered categories of their own, because of the many ways you can
configure them.
Disaster recovery and backup - For disaster recovery, Hyper-V Replica creates copies of
virtual machines, intended to be stored in another physical location, so you can restore the
virtual machine from the copy. For backup, Hyper-V offers two types. One uses saved states
and the other uses Volume Shadow Copy Service (VSS) so you can make application-
consistent backups for programs that support VSS.
Optimization - Each supported guest operating system has a customized set of services and
drivers, called integration services, that make it easier to use the operating system in a Hyper-
V virtual machine.
Portability - Features such as live migration, storage migration, and import/export make it
easier to move or distribute a virtual machine.
Security - Secure boot and shielded virtual machines help protect against malware and other
unauthorized access to a virtual machine and its data.
Ans:
4 Less time is taken to process the jobs. More time is taken to process the jobs.
• Program execution
• I/O operations
• File System manipulation
• Communication
• Error Detection
• Resource Allocation
• Protection
Program execution
Operating systems handle many kinds of activities from user programs to system programs
like printer spooler, name servers, file server, etc. Each of these activities is encapsulated as a
process.
I/O Operation
An I/O subsystem comprises I/O devices and their corresponding driver software. Drivers hide the peculiarities of specific hardware devices from the users.
File system manipulation
A file represents a collection of related information. Computers can store files on the disk
(secondary storage), for long-term storage purpose. Examples of storage media include
magnetic tape, magnetic disk and optical disk drives like CD, DVD. Each of these media has
its own properties like speed, capacity, data transfer rate and data access methods.
Communication
In case of distributed systems which are a collection of processors that do not share memory,
peripheral devices, or a clock, the operating system manages communications between all the
processes. Multiple processes communicate with one another through communication lines in
the network.
Error handling
Errors can occur anytime and anywhere. An error may occur in the CPU, in I/O devices or in the memory hardware. Following are the major activities of an operating system with respect to error handling −
• The OS constantly checks for possible errors.
• The OS takes appropriate action to ensure correct and consistent computing.
Ans:
A hypervisor installs a VM from the same ISO image you would download and use to install
an operating system directly onto an empty physical hard drive.
A container, on the other hand, is effectively an application, launched from a script-like template, that thinks it is an operating system. In container technologies (like LXC and Docker), containers are nothing more than software and resource (files, processes, users) abstractions that rely on the host kernel and a representation of the “core four” hardware resources (i.e., CPU, RAM, network and storage) for everything they do.
Of course, since containers are, effectively, isolated extensions of the host kernel, virtualizing
Windows (or even older or newer Linux releases running incompatible versions of libc) on,
say, an Ubuntu 16.04 host, is impossible. But the technology does allow for incredibly
lightweight and versatile compute opportunities.
Migration
The virtualization model also permits a very wide range of migration, backup, and cloning
operations — even from running systems (V2V). Since the software resources that define and
drive a virtual machine are so easily identified, it usually doesn’t take too much effort to
duplicate whole server environments in multiple locations and for multiple purposes.
Sometimes it’s no more complicated than creating an archive of a virtual file system on one host, unpacking it within the same path on a different host, checking the basic network settings, and firing it up. Most platforms offer a single command-line operation to move guests between hosts.
Ans:
Android is one of the most common operating systems out there. It has proven to dominate the
smartphone market but is yet to get its way into the world of PCs. This is not to mean that you
cannot enjoy having an Android environment on your computer. You can do so using
virtualization software.
There are many reasons why you would want to have the latest Android 8.1 Oreo on your
computer. It could be that you are a developer and have decided to venture into Android apps.
You will need an emulator to test the apps you develop. You can use virtualization software to
create an Android device-like environment on which you can try out the application. Perhaps
you are just curious what the Oreo has to offer. You can find out by running it on a virtual
machine. Doing this should actually be the first step before any Android user upgrades their
Operating System.
An Android virtual machine can be created using various virtualization software solutions
available. There are many of them but only two have the very best features. These are
VirtualBox and VMware. Their free versions are feature-laden while their paid versions make
the impossible possible. Users get access to every feature of Android just like it works on a
phone. Developers will appreciate the fact that they can create different Android device-like
virtual machines so they can test apps on devices of different specifications. They will be able
to easily create virtual machines with different RAM, ROM, and other specs so as to determine
how the app will work on different Android phones.
The standard Java API and virtual machine are mainly designed for desktop as well as server
systems. They are not that compatible with mobile devices. Because of this, Google has created
a different API and virtual machine for mobile devices. This is known as the Dalvik virtual
machine.
The Dalvik virtual machine is a key component of the Android runtime. It plays a role analogous to that of the JVM (Java Virtual Machine), but was developed specially for Android. The Dalvik virtual machine provides features that are quite important in Java, such as memory management and multi-threading. Programs in Java are first compiled into JVM bytecode, which is then translated into DVM bytecode.
Details about both the JVM and the DVM are given as follows:
Java Virtual Machine
The Java Virtual Machine is an application that provides the run-time environment to execute
the Java bytecode. It converts the bytecode into machine code. The Java Virtual Machine can
perform multiple operations like loading the code, verifying the code, executing the code,
providing run-time environment etc.
A diagram that illustrates the working of the Java Virtual Machine is given as follows:
Ans:
Contiguous Allocation
• The simplest allocation scheme is to store each file as a contiguous run of disk block.
• We see an example of contiguous storage allocation in fig. 7-5.
• Here the first 40 disk blocks are shown, starting with block 0 on the left. Initially, the
disk was empty.
• Each file occupies a contiguous address space on disk.
• Assigned disk addresses are in linear order.
• It is easy to implement.
• External fragmentation is a major issue with this type of allocation technique.
• Advantages
▪ First it is simple to implement because keeping track of where a file’s blocks are is
reduced to remembering two numbers: The disk address of the first block and the
number of blocks in the file.
▪ Second, the read performance is excellent because the entire file can be read from
the disk in a single operation. Only one seek is needed (to the first block), so data
comes in at the full bandwidth of the disk.
▪ Thus contiguous allocation is simple to implement and has high performance.
Drawbacks
▪ Over time the disk becomes fragmented: deleted files leave holes, and a new file must
fit into a hole large enough to hold it, so space is wasted (external fragmentation).
▪ The final size of a file must be known when it is created, so that a sufficiently large
region of the disk can be reserved in advance.
• Another method for storing files is to keep each one as a linked list of the disk blocks,
as shown in fig
• The first word of each block is used as a pointer to the next one. The rest of the block is
for data.
• Unlike contiguous allocation, every disk block can be used in this method. No space
is lost to disk fragmentation.
• It is sufficient for a directory entry to store only disk address of the first block, rest
can be found starting there.
Drawbacks
▪ Random access is extremely slow: to reach block n of a file, the operating system has to
start at the beginning and follow the chain, reading the blocks before it one at a time.
▪ The pointer takes up some bytes of every block, so the amount of data per block is no
longer a power of two, which complicates many programs.
System Calls
Application developers often do not have direct access to the system calls, but can access them
through an application programming interface (API). The functions that are included in the
API invoke the actual system calls. By using the API, certain benefits can be gained:
• Portability: as long as a system supports an API, any program using that API can compile and
run.
• Ease of use: using the API can be significantly easier than using the actual system call.
Process Control
A running program needs to be able to stop execution either normally or abnormally. When
execution is stopped abnormally, often a dump of memory is taken and can be examined with
a debugger.
File Management
Some common system calls are create, delete, read, write, reposition, or close. Also, there is a
need to determine the file attributes – get and set file attribute. Many times the OS provides an
API to make these system calls.
Device Management
Processes usually require several resources to execute. If these resources are available, they will
be granted and control returned to the user process. These resources can also be thought of as
devices. Some are physical, such as a video card, and others are abstract, such as a file.
User programs request the device, and when finished they release the device. Similar to files,
we can read, write, and reposition the device.
Information Management
Some system calls exist purely for transferring information between the user program and the
operating system. Examples are calls that return the current time and date.
The OS also keeps information about all its processes and provides system calls to report this
information.
Communication
There are two models of interprocess communication, the message-passing model and the
shared memory model.
36) Write about semaphores. Write benefits of threads and difference between thread and
process. [07]
Ans:
Semaphore:
A semaphore is a variable that provides an abstraction for controlling
access of a shared resource by multiple processes in a parallel
programming environment.
There are 2 types of semaphores:
1. Binary semaphores: - Binary semaphores have two operations
associated with them (up/down, also called lock/unlock). A binary
semaphore can take only two values (0/1). They are used to acquire locks.
2. Counting semaphores: - A counting semaphore can take values
greater than one, typically the number of available instances of a resource.
1. Process: A process is heavyweight, or resource intensive. Thread: A thread is lightweight,
taking fewer resources than a process.
2. Process: Process switching needs interaction with the operating system. Thread: Thread
switching does not need to interact with the operating system.
3. Process: In multiple-processing environments, each process executes the same code but has
its own memory and file resources. Thread: All threads can share the same set of open files
and child processes.
4. Process: If one process is blocked, no other process can execute until the first process is
unblocked. Thread: While one thread is blocked and waiting, a second thread in the same
task can run.
5. Process: Multiple processes without using threads use more resources. Thread: Multiple
threaded processes use fewer resources.
Thread benefits:
• Responsiveness: a multithreaded application can continue running even if one of its threads is blocked.
• Resource sharing: threads share the memory and resources of the process to which they belong.
• Economy: creating and context-switching threads is much cheaper than creating and switching processes.
• Scalability: the threads of a single process can run in parallel on multiple CPUs.
Ans:
The Banker’s algorithm uses tables such as allocation, request, and available to understand the
state of the system. The same information can instead be represented as a graph. That graph is
called the Resource Allocation Graph (RAG).
The resource allocation graph describes the state of the system in terms of processes
and resources: how many resources are available, how many are allocated, and what each
process has requested. One advantage of a diagram is that a deadlock can sometimes be seen
directly in the RAG, whereas it may not be obvious from the tables. Tables are better when the
system contains many processes and resources; a graph is better when it contains few.
We know that any graph contains vertices and edges, and so does a RAG. In a RAG the
vertices are of two types:
1. Process vertex: every process is represented as a process vertex, generally drawn as a
circle.
2. Resource vertex: every resource is represented as a resource vertex, drawn as a box. It is
also of two types:
• Single-instance resource: the box contains a single dot; in general, the number of dots
indicates how many instances of the resource type are present.
• Multi-instance resource: the box contains several dots.
Now coming to the edges of a RAG, there are two types:
1. Assignment edge: drawn when a resource has already been assigned to a process.
2. Request edge: drawn when a process wants a resource in order to complete its
execution.
So, if a process is using a resource, an arrow is drawn from the resource node to the process
node. If a process is requesting a resource, an arrow is drawn from the process node to the
resource node.
Example 1 (Single instances RAG) –
If there is a cycle in the Resource Allocation Graph and each resource in the cycle provides
only one instance, then the processes will be in deadlock. For example, if process P1 holds
resource R1, process P2 holds resource R2 and process P1 is waiting for R2 and process P2 is
waiting for R1, then process P1 and process P2 will be in deadlock.
Ans:
o Uniform naming:
• The name of a file or device should be simply a string or an integer and must not
depend upon the device in any way.
• In UNIX, all disks can be integrated in file system hierarchy in arbitrary way
so user need not be aware of which name corresponds to which device.
• All files and devices are addressed the same way: by a path name.
o Error handling:
• Errors should be handled as close to the hardware as possible. If a controller
generates an error, it should try to solve that error itself. If the controller cannot
solve it, the device driver should handle the error, perhaps by reading the
blocks again.
• Many errors are resolved in a lower layer. If a lower layer cannot handle an
error, the problem should be reported to an upper layer.
• In many cases error recovery can be done at a lower layer without the upper
layers even knowing about the error.
o Synchronous versus Asynchronous:
• Most devices are asynchronous: the CPU starts a transfer and goes off to do
something else until an interrupt occurs. I/O software needs to support both
asynchronous and synchronous devices.
• User programs are much easier to write if the I/O operations are blocking.
• It is up to the operating system to make operations that are actually interrupt-
driven look blocking to the user programs.
o Buffering:
• Data coming off a device often cannot be stored directly at its final destination. For
example, packets arriving from the network cannot be placed straight into their final
position in memory; they have to be put into a buffer so they can be examined first.
• Some devices have real-time constraints, so data must be put into an output buffer
in advance to decouple the rate at which the buffer is filled from the rate at which it is
emptied, in order to avoid buffer underruns.
• Buffering involves considerable copying and often has a major impact on I/O
performance.
Ans:
The Linux kernel is the heart of the operating system. Without the kernel we simply cannot
perform any task, since it is mainly responsible for making the software and hardware of our
computer work correctly and interact with each other.
Kernel code executes in a special privileged mode called kernel mode, with full access to all
resources of the computer. This code represents a single process, executes in a single address
space, and requires no context switch, so it is very efficient and fast. The kernel runs each
process, provides system services to processes, and gives processes protected access to
hardware.
• At the lowest level it contains interrupt handlers, which are the primary
way of interacting with devices, and the low-level dispatching mechanism.
• At the highest level, the I/O operations are all integrated under a virtual
file system, and at the lowest level all I/O operations pass through some
device driver.
• All Linux drivers are classified as either character device drivers or
block device drivers, the main difference being that random accesses
are allowed on block devices and not on character devices.
• Technically, network devices are really character devices, but they are
handled somewhat differently, so it is preferable to separate them.
• On top of the disk drivers sits the I/O scheduler, which is responsible
for ordering and issuing disk operation requests in a way that tries to
conserve wasteful disk-head movement.
• Memory-management tasks include maintaining the virtual-to-physical
memory mappings, maintaining a cache of recently accessed pages, and
implementing a good page-replacement policy.
Kernel functions
The main functions of the Kernel are the following:
• Manage RAM memory, so that all programs and running processes can work.
• Manage the processor time, which is used by running processes.
• Manage access and use of the different peripherals connected to the computer.
Ans:
Computer software is roughly divided into two main categories - application software and
operating system software. Applications are programs used by people to carry out various tasks,
such as writing a letter, creating a financial spreadsheet, or querying a customer database.
Operating systems, on the other hand, manage the computer system on which these applications
run. Without an operating system, it would be impossible to run any of the application software
we use every day, or even to boot up the computer.
The early computers of the late 1940s had no operating system. Human operators scheduled
jobs for execution and supervised the use of the computer’s resources. Because these early
computers were very expensive, the main purpose of an operating system in these early days
was to make the hardware as efficient as possible. Now, computer hardware is relatively cheap
by comparison with the cost of the personnel required to operate it, so the purpose of the
operating system has evolved to encompass the task of making the user as efficient as possible.
An operating system functions in much the same way as other software. It is a collection of
programs that are loaded into memory and executed by the processor. When the computer is
powered down it exists only as a collection of files on a disk drive. The main difference is that,
once it is running, it has a large measure of control over both the processor itself and other
system resources. In fact, the operating system only relinquishes control to allow other
programs to execute. An application program is frequently given control of the processor for
short periods of time in order to carry out its allotted task, but control always reverts to the
operating system, which can then either use the processor itself or allocate it to another
program.
The operating system, then, controls the operation of the computer. This includes determining
which programs may use the processor at any given time, managing system resources such as
working memory and secondary storage, and controlling access to input and output devices. In
addition to controlling the system itself, the operating system must provide an interface between the
system and the user which allows the user to interact with the system in an optimal manner.
Increasingly these days, the operating system provides sophisticated networking functionality,
and is expected to be compatible with a growing range of communication devices and other
peripherals. In recent years, the implementation of an application programming interface (API)
has been a feature of most operating systems, making the process of writing application
programs for those operating systems much easier, and creating a standardised application
environment.