

Question Bank with Answers


Subject Name: Operating System and Virtualization
Subject code: 3141601

1) What is an operating system? Give the view of the OS as a resource manager. [07]


Ans: An operating system (OS) is a collection of software that manages computer
hardware resources and provides various services for computer programs. It acts
as an intermediary between the user of a computer and the computer hardware.

▪ The concept of an operating system as providing abstractions to application programs is a
top-down view.
▪ Alternatively, the bottom-up view holds that the OS is there to manage all the pieces of a
complex system.
▪ A computer consists of a set of resources such as processors, memories, timers, disks, printers
and many others.
▪ The Operating System manages these resources and allocates them to specific programs.
▪ As a resource manager, Operating system provides controlled allocation of the processors,
memories, I/O devices among various programs.
▪ Multiple user programs are running at the same time.
▪ The processor itself is a resource and the Operating System decides how much processor time
should be given for the execution of a particular user program.
▪ Operating system also manages memory and I/O devices when multiple users are working.
▪ The primary task of the OS is to keep track of which programs are using which resources, to
grant resource requests, to account for usage, and to resolve conflicting requests from different
programs and users.
▪ An Operating System is a control program. A control program controls the execution of user
programs to prevent errors and improper use of computer.
▪ Resource management includes multiplexing (sharing) resources in two ways: in time and in
space.
▪ When a resource is time multiplexed, different programs or users take turns using it. First one
of them gets to use the resource, then another, and so on.
▪ For example, CPU and printer are time multiplexed resources. OS decides who will use it and
for how long.
▪ The other kind of multiplexing is space multiplexing: instead of the customers taking turns,
each one gets part of the resource.
▪ For example, both primary and secondary memories are space multiplexed. The OS allocates
them to user programs and keeps track of the allocation.

2) What is a system call? Explain the steps of system call execution. [07]
Ans: A system call is the mechanism by which a user program requests a service (such as
reading a file or creating a process) from the operating system kernel.

▪ The interface between the operating system and the user programs is defined by the set of
system calls that the operating system provides.
▪ The system calls available in the interface vary from operating system to operating system.
▪ Any single-CPU computer can execute only one instruction at a time.
▪ If a process is running a user program in user mode and needs a system service, such as reading
data from a file, it has to execute a trap or system call instruction to transfer control to the
operating system.
▪ The operating system then figures out what the calling process wants by inspecting the
parameters.
▪ Then it carries out the system call and returns control to the instruction following the system
call.
▪ The following steps describe how a system call is handled by an operating system.
▪ To understand how the OS handles system calls, let us take the read system call as an example.
▪ Read system call has three parameters: the first one specifying the file, the second one pointing
to the buffer, and the third one giving the number of bytes to read.
▪ Like nearly all system calls, it is invoked from C programs by calling a library procedure with
the same name as the system call: read.
▪ A call from a C program might look like this:
▪ count = read(fd, buffer, nbytes);
▪ The system call returns the number of bytes actually read in count.
▪ This value is normally the same as nbytes, but may be smaller if, for example, end-of-file is
encountered while reading.
▪ If the system call cannot be carried out, either due to an invalid parameter or a disk error, count
is set to -1, and the error number is put in a global variable, errno.
▪ Programs should always check the results of a system call to see if an error occurred.
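As a minimal sketch of such checking in C (it assumes a POSIX system; the file name is a
hypothetical example):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        char buffer[128];
        int fd = open("data.txt", O_RDONLY);    /* hypothetical input file */
        if (fd == -1) {
            fprintf(stderr, "open failed: %s\n", strerror(errno));
            return 1;
        }
        ssize_t count = read(fd, buffer, sizeof(buffer));
        if (count == -1)                         /* system call failed; errno is set */
            fprintf(stderr, "read failed: %s\n", strerror(errno));
        else
            printf("read %zd bytes\n", count);   /* may be fewer than requested */
        close(fd);
        return 0;
    }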
▪ System calls are performed in a series of steps.
▪ To make this concept clearer, let us examine the read call discussed above.
▪ In preparation for calling the read library procedure, which actually makes the read system call,
the calling program first pushes the parameters onto the stack, as shown in steps 1-3
▪ The first and third parameters are called by value, but the second parameter is passed by
reference, meaning that the address of the buffer (indicated by &) is passed, not the contents
of the buffer.
▪ Then comes the actual call to the library procedure (step 4). This instruction is the normal
procedure call instruction used to call all procedures.

▪ The library procedure, possibly written in assembly language, typically puts the system call
number in a place where the operating system expects it, such as a register (step 5).

▪ Then it executes a TRAP instruction to switch from user mode to kernel mode and start
execution at a fixed address within the kernel (step 6).
▪ The kernel code that starts examines the system call number and then dispatches to the correct
system call handler, usually via a table of pointers to system call handlers indexed on system
call number (step 7).
▪ At that point the system call handler runs (step 8).
▪ Once the system call handler has completed its work, control may be returned to the user-space
library procedure at the instruction following the TRAP instruction (step 9).
▪ This procedure then returns to the user program in the usual way procedure calls return (step
10).
▪ To finish the job, the user program has to clean up the stack, as it does after any procedure call
(step 11).

3) Explain the process state model with a diagram. [07]


Ans:

▪ A process is a program which is currently in execution. A program by itself is not a process;
it is a passive entity, just like the content of a file stored on disk, while a process is an active
entity.

▪ A process also includes the process stack, which contains temporary data (such as local
variables, function parameters and return addresses), a data section, which contains global
variables, and a heap, which is memory dynamically allocated to the process at run time,
along with a process state that defines its current status.

▪ A process changes its state during its execution. Each process may be in one of the following
states:

1. New: when a new process is being created.


2. Running: A process is said to be in running state when instructions are being executed.
3. Waiting: The process is waiting for some event to occur (such as an I/O operation).
4. Ready: The process is waiting to be assigned to a processor.
5. Terminated: The process has finished execution.

Only one process can be in running state on any processor at a time while multiple processes
may be in ready and waiting state. The process state diagram shown below describes
different process states during its lifetime.

Process Control Block (PCB):

▪ Operating system maintains one special data structure called Process Control Block (PCB).
▪ All the information about each process is stored in the process control block (PCB) which is
maintained by the operating system. It contains the following information associated with a
specific process.

▪ Process state: It represents current status of the process. It may be new, ready, running or
waiting.
▪ Program counter: It indicates the address of the next instruction to be executed for this
process.
▪ CPU Registers: They include index registers, the stack pointer and general-purpose registers.
They are used to save the process state when an interrupt occurs, so that it can resume from that state.
▪ CPU-scheduling information: it includes process priority, pointer to scheduling queue.
▪ Memory management information: value of the base and limit registers, page tables
depending on the memory system.
▪ Accounting information: it contains the amount of CPU and real time used, time limits,
process numbers and so on.
▪ I/O status information: It includes a list of I/O devices allocated to the process, a list of open
files and so on.
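A minimal sketch of how a PCB might be declared in C (the field names and sizes are
illustrative assumptions, not any particular kernel's layout):

    typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

    struct pcb {
        int            pid;              /* process identifier                */
        proc_state_t   state;            /* current process state             */
        unsigned long  program_counter;  /* address of the next instruction   */
        unsigned long  registers[16];    /* saved CPU registers               */
        int            priority;         /* CPU-scheduling information        */
        unsigned long  base, limit;      /* memory-management information     */
        unsigned long  cpu_time_used;    /* accounting information            */
        int            open_files[16];   /* I/O status: open file descriptors */
        struct pcb    *next;             /* link for a scheduling queue       */
    };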

4) Difference between process and thread. Explain the features of a time-sharing
system. [07]
Ans:

BASIS FOR COMPARISON         PROCESS                                  THREAD

Basic                        Program in execution.                    Lightweight process or part of it.

Memory sharing               Completely isolated; does not            Shares memory with other threads.
                             share memory.

Resource consumption         More                                     Less

Efficiency                   Less efficient than a thread in          Enhances efficiency in the context
                             the context of communication.            of communication.

Time required for creation   More                                     Less

Context switching time       Takes more time.                         Consumes less time.

Uncertain termination        Results in loss of the process.          A thread can be reclaimed.

Time required for            More                                     Less
termination

Features of Time sharing system

▪ Time Sharing is a logical extension of multiprogramming.


▪ Multiple jobs are executed simultaneously by switching the CPU back and forth among them.
▪ The switching occurs so frequently that the users cannot perceive the presence of other
users or programs.
▪ Each user can interact with his or her program while it is running in time-sharing mode.
▪ Processor’s time is shared among multiple users. An interactive or hands on computer system
provides online communication between the user and the system.
▪ A time-shared operating system uses CPU scheduling and multiprogramming to provide each
user with a small portion of a time-shared computer. Each user has at least one separate
program in memory.

5) Define the following terms: 1. Throughput 2. Waiting Time 3. Turnaround Time
4. Response Time 5. Granularity 6. Short-Term Scheduler 7. CPU Utilization [07]

Ans:

1. Throughput is the amount of work completed in a unit of time. In other words, throughput is
the number of jobs completed per unit of time. The scheduling algorithm must aim to maximize
the number of jobs processed per time unit.

2. Waiting time is how much time a process spends in the ready queue waiting for its turn to get
onto the CPU. (A related measure is the load average: the average number of processes sitting in
the ready queue waiting for their turn to get onto the CPU.)

3. Turnaround time (TAT) is the time interval from the time of submission of a process to
the time of completion of the process. It can also be considered as the sum of the time periods
spent waiting to get into memory or the ready queue, executing on the CPU, and doing
input/output.
4. Response time is the difference between the time of first execution and the arrival time, i.e.,
the time taken by the system to respond to an input and display the required updated
information. (A worked example after these definitions ties measures 1-4 together.)

5. In parallel computing, granularity (or grain size) of a task is a measure of the amount of work
(or computation) which is performed by that task. Another definition of granularity takes into
account the communication overhead between multiple processors or processing elements

6. The short-term scheduler (also known as the CPU scheduler) decides which of the ready, in-
memory processes is to be executed (allocated a CPU) after a clock interrupt, an I/O interrupt,
an operating system call or another form of signal

7. CPU utilization refers to a computer's usage of processing resources, or the amount of work
handled by a CPU. Actual CPU utilization varies depending on the amount and type of managed
computing tasks. Certain tasks require heavy CPU time, while others require less because of non-
CPU resource requirements.
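As a small illustration of definitions 1-4 (all numbers hypothetical): suppose a process arrives at
time 0, first gets the CPU at time 2, needs a total CPU burst of 5, and completes at time 10. Then
turnaround time = 10 - 0 = 8, waiting time = 8 - 5 = 3, and response time = 2 - 0 = 2. If four such
jobs finish within 20 time units, throughput = 4/20 = 0.2 jobs per unit time.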

6) Explain Swapping and Fragmentation in detail. [07]

Ans:

Swapping

▪ In practice, the total amount of memory needed by all the processes is often much more than
the available memory.
▪ Swapping is one approach used to deal with memory overload.
▪ Swapping consists of bringing in each process in its entirety, running it for a while, then putting
it back on the disk.
▪ The event of copying a process from the hard disk to main memory is called swap-in.
▪ The event of copying a process from main memory to the hard disk is called swap-out.
▪ When swapping creates multiple holes in memory, it is possible to combine them all into one
big hole by moving all the processes downward as far as possible. This technique is called
memory compaction.
▪ Two ways to implement Swapping System
▪ Multiprogramming with Fixed partitions.
▪ Multiprogramming with dynamic partitions.
▪ Fragmentation occurs in a dynamic memory allocation system when many of the free blocks
are too small to satisfy any request.
▪ RAM Fragmentation
▪ Fragmentation can also refer to RAM that has small, unused holes scattered throughout it. This
is called external fragmentation. With modern operating systems that use a paging scheme, a
more common type of RAM fragmentation is internal fragmentation. This occurs when
memory is allocated in frames and the frame size is larger than the amount of memory
requested.
▪ External Fragmentation: External Fragmentation happens when a dynamic memory
allocation algorithm allocates some memory and a small piece is left over that cannot be
effectively used. If too much external fragmentation occurs, the amount of usable memory is
drastically reduced. Total memory space exists to satisfy a request, but it is not contiguous.
▪ Internal Fragmentation: Internal fragmentation is the space wasted inside allocated
memory blocks because of restrictions on the allowed sizes of allocated blocks. Allocated
memory may be slightly larger than the requested memory; this size difference is memory
internal to a partition that is not being used.
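A quick worked illustration (hypothetical sizes): with 4 KB frames, a process requesting 10 KB
receives 3 frames = 12 KB, wasting 2 KB inside the last frame (internal fragmentation).
Conversely, if free memory consists of three separate 100 KB holes, a 250 KB request fails even
though 300 KB is free in total (external fragmentation).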

7) Explain the Thread Life Cycle with a diagram. Explain Distributed OS with a neat sketch
and give its pros and cons. [07]
Ans:

1. New state – After the creation of a Thread instance the thread is in this state, but before
the start() method invocation. At this point, the thread is considered not alive.

2. Runnable (Ready-to-run) state – A thread starts its life from the Runnable state. A thread
first enters the Runnable state after the invocation of the start() method, but a thread can
also return to this state after either running, waiting, sleeping or coming back from the
Blocked state. In this state a thread is waiting for a turn on the processor.

3. Running state – A thread in the Running state is currently executing. There are several
ways to enter the Runnable state, but there is only one way to enter the Running state:
the scheduler selects a thread from the runnable pool.

4. Dead state – A thread can be considered dead when its run() method completes. If any
thread reaches this state, it can never run again.
5. Blocked – A thread can enter this state while waiting for resources that are held by
another thread.

Distributed OS:
Operating systems are developed to ease people's daily life. Depending on users' benefits and
needs, an operating system may be single-user or distributed. In a distributed system, many
computers are connected to each other and share their resources.

Advantages of distributed operating systems:-

• Gives more performance than a single system


• If one PC in the distributed system malfunctions or gets corrupted, then another node or
PC will take over its work
• More resources can be added easily
• Resources like printers can be shared among multiple PCs

Disadvantages of distributed operating systems:-

• Security problem due to sharing


• Some messages can be lost in the network system
• Bandwidth is another problem: if large amounts of data must be moved, the network links
may need to be upgraded, which tends to become expensive
• Overloading is another problem in distributed operating systems
• If a database is connected to a local system and many users access that database remotely,
performance becomes slow

8) Explain all accessing methods of a file. [07]


Ans:

When a file is used, its information is read into computer memory, and there are
several ways to access this information. Some systems provide only one access
method for files. Other systems, such as those of IBM, support many access methods, and
choosing the right one for a particular application is a major design problem.
There are three ways to access a file in a computer system: sequential access, direct
access, and the indexed sequential method.

1. Sequential Access –
It is the simplest access method. Information in the file is processed in order, one record
after the other. This mode of access is by far the most common; for example, editors and
compilers usually access files in this fashion.

Reads and writes make up the bulk of the operations on a file. A read operation (read
next) reads the next portion of the file and automatically advances the file pointer, which
keeps track of the I/O location. Similarly, a write operation (write next) appends to the
end of the file and advances the pointer to the newly written material.
Key points:
• Data is accessed one record right after another, in order.
• A read command advances the pointer by one record.
• A write command allocates space and moves the pointer to the end of the file.
• Such a method is reasonable for tape.

2. Direct Access –
Another method is direct access, also known as relative access. A file is made up of
fixed-length logical records that allow the program to read and write records rapidly, in
no particular order. Direct access is based on the disk model of a file, since a disk allows
random access to any file block. For direct access, the file is viewed as a numbered
sequence of blocks or records. Thus, we may read block 14, then block 59, and then
write block 17. There is no restriction on the order of reading and writing for a direct
access file.
A block number provided by the user to the operating system is normally a relative
block number; the first relative block of the file is 0, then 1, and so on.

3. Index sequential method –


It is another method of accessing a file, built on top of the direct access method. This
method constructs an index for the file. The index, like an index in the
back of a book, contains pointers to the various blocks. To find a record in the file,
we first search the index and then, with the help of the pointer, access the file directly.
Key points:
• It combines sequential organization with an index that permits direct access.
• It controls the pointer by using an index.
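The difference between sequential and direct access can be sketched with POSIX calls (a
minimal illustration; the file name and record size are assumptions):

    #include <fcntl.h>
    #include <unistd.h>

    #define RECSIZE 64                      /* assumed fixed record length */

    int main(void) {
        char rec[RECSIZE];
        int fd = open("records.dat", O_RDONLY);  /* hypothetical file */
        if (fd == -1) return 1;

        /* Sequential access: each read advances the file pointer. */
        read(fd, rec, RECSIZE);             /* record 0 */
        read(fd, rec, RECSIZE);             /* record 1 */

        /* Direct (relative) access: seek to record n, then read.  */
        int n = 14;
        lseek(fd, (off_t)n * RECSIZE, SEEK_SET);
        read(fd, rec, RECSIZE);             /* record 14, in no particular order */

        close(fd);
        return 0;
    }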

9) What are the major goals of I/O software? Explain DMA. [07]


Ans:
The major goals of I/O software are as follows.

Device Independence:
• It should be possible to write programs that can access any I/O device
without having to specify the device in advance.
• For example, a program that reads a file as input should be able to read a file
on a floppy disk, on a hard disk, or on a CD-ROM, without having to modify
the program for each different device.

Uniform naming:
• The name of a file or device should simply be a string or a number; it must
not depend upon the device in any way.
• In UNIX, all disks can be integrated in file system hierarchy in arbitrary
way so user need not be aware of which name corresponds to which device.
• All files and devices are addressed the same way: by a path name.

Error handling:
• Errors should be handled as close to the hardware as possible. If a controller
generates an error, it should try to solve that error itself. If the controller cannot
solve it, the device driver should handle the error, perhaps by reading the
blocks again.
• Many errors can be resolved in a lower layer. If the lower layers are not able to
handle an error, the problem should be told to the upper layers.
• In many cases error recovery can be done at a lower layer without the upper
layers even knowing about the error.

Synchronous versus Asynchronous:


• Most devices are asynchronous: the CPU starts the transfer and goes off
to do something else until the interrupt occurs. I/O software needs to support
both types of devices.
• User programs are much easier to write if the I/O operations are blocking.
• It is up to the operating system to make operations that are actually interrupt-
driven look blocking to the user programs.

Buffering:
• Data coming from a device often cannot be stored directly at its final destination. For
example, data packets coming from the network cannot be stored directly in physical
memory; they have to be put into a buffer first so that they can be examined.
Direct Memory Access.
• The CPU needs to address the device controllers to exchange data with them.
• The CPU can request data from an I/O controller one byte at a time, but that
wastes the CPU's time.
• So a different scheme, called DMA (Direct Memory Access), is used.
The operating system can only use DMA if the hardware has a DMA
controller.
• A DMA controller is available for regulating transfers to multiple devices.
• The DMA controller has separate access to the system bus, independent
of the CPU, as shown in figure 6-2. It contains several registers that can be
written and read by the CPU.
• These registers include a memory address register, a byte count register and
one or more control registers.

The buses can be operated in two modes:


• Word-at-a-time mode: Here the DMA controller requests the transfer of one
word and gets it. If the CPU also wants the bus, it has to wait.
This mechanism is known as cycle stealing, as the device controller
sneaks in and steals an occasional bus cycle from the CPU, delaying it
slightly.
• Block mode: Here the DMA controller tells the device to acquire the bus,
issue a series of transfers and then release the bus. This form of
operation is called burst mode. It is more efficient than cycle stealing.
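A device driver would program such registers through memory-mapped I/O. The sketch
below is purely illustrative: the register addresses and control bits are hypothetical, not any
real controller's interface.

    #include <stdint.h>

    /* Hypothetical memory-mapped DMA controller registers */
    #define DMA_ADDR  ((volatile uint32_t *)0xFFFF0000)  /* memory address register */
    #define DMA_COUNT ((volatile uint32_t *)0xFFFF0004)  /* byte count register     */
    #define DMA_CTRL  ((volatile uint32_t *)0xFFFF0008)  /* control register        */
    #define DMA_START 0x1    /* hypothetical "go" bit       */
    #define DMA_BURST 0x2    /* hypothetical burst-mode bit */

    void dma_transfer(uint32_t dest, uint32_t nbytes) {
        *DMA_ADDR  = dest;                   /* where in memory to put the data */
        *DMA_COUNT = nbytes;                 /* how many bytes to transfer      */
        *DMA_CTRL  = DMA_START | DMA_BURST;  /* start a burst-mode transfer     */
        /* The CPU is now free; the controller raises an interrupt
           when the byte count reaches zero. */
    }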

Disadvantages of DMA:
Generally the CPU is much faster than the DMA controller and could do the job
much faster itself, so if there is no other work for it to do, the CPU has to wait for
the slower DMA controller.
10) What is a semaphore? Explain its properties along with its drawbacks. Explain any
problem and solve it with semaphores. [07]
Ans:

Semaphore:
A semaphore is a variable that provides an abstraction for controlling access to a
shared resource by multiple processes in a parallel programming environment.

Properties of Semaphores

1. It is simple and always has a non-negative integer value.


2. Works with many processes.
3. Can have many different critical sections with different semaphores.
4. Each critical section has unique access semaphores.

Disadvantages of Semaphores
Some of the disadvantages of semaphores are as follows:

1. Semaphores are complicated so the wait and signal operations must be implemented in
the correct order to prevent deadlocks.
2. Semaphores are impractical for large scale use, as their use leads to loss of modularity.
This happens because the wait and signal operations prevent the creation of a structured
layout for the system.
3. Semaphores may lead to a priority inversion where low priority processes may access
the critical section first and high priority processes later.

Bounded Buffer Producer Consumer problem:


A buffer of size N is shared by several processes. We are given sequential code
▪ insert_item - adds an item to the buffer.
▪ remove_item - removes an item from the buffer.
We want functions insert_item and remove_item such that the following hold:
1. Mutually exclusive access to buffer: At any time only one process should be
executing insert_item or remove_item.
2. No buffer overflow: A process executes insert_item only when the buffer is
not full (i.e., the process is blocked if the buffer is full).
3. No buffer underflow: A process executes remove_item only when the buffer
is not empty (i.e., the process is blocked if the buffer is empty).
4. No busy waiting.
5. No producer starvation: A process does not wait forever at insert_item()
provided the buffer repeatedly becomes full.
6. No consumer starvation: A process does not wait forever at remove_item()
provided the buffer repeatedly becomes empty.
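A minimal sketch of the classic solution using POSIX semaphores (the buffer size is an
assumption, and insert_item/remove_item stand for the given sequential code):

    #include <pthread.h>
    #include <semaphore.h>

    #define N 10                    /* assumed buffer size */

    extern void insert_item(int);   /* the given sequential code */
    extern int  remove_item(void);

    sem_t mutex;                    /* binary semaphore: mutual exclusion; init to 1 */
    sem_t empty;                    /* counts empty slots; init to N                 */
    sem_t full;                     /* counts full slots; init to 0                  */

    void producer(int item) {
        sem_wait(&empty);           /* block if buffer is full (no overflow)   */
        sem_wait(&mutex);           /* enter the critical section              */
        insert_item(item);
        sem_post(&mutex);           /* leave the critical section              */
        sem_post(&full);            /* wake a waiting consumer                 */
    }

    int consumer(void) {
        sem_wait(&full);            /* block if buffer is empty (no underflow) */
        sem_wait(&mutex);
        int item = remove_item();
        sem_post(&mutex);
        sem_post(&empty);           /* wake a waiting producer                 */
        return item;
    }

    /* Initialization, e.g. in main():
       sem_init(&mutex, 0, 1); sem_init(&empty, 0, N); sem_init(&full, 0, 0); */

Because blocked processes are queued on the semaphores, this sketch also avoids busy waiting
and starvation. Note that the order sem_wait(&empty) before sem_wait(&mutex) matters:
reversing it can deadlock, which illustrates drawback 1 above.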

11) What are the necessary conditions for deadlock? Explain deadlock recovery in brief. [07]
Ans:

There are four conditions that must hold for deadlock:


Mutual exclusion condition
Each resource is either currently assigned to exactly one process or is
available.
Hold and wait condition
Process currently holding resources granted earlier can request more resources.
No preemption condition
Previously granted resources cannot be forcibly taken away from process.
Circular wait condition
There must be a circular chain of 2 or more processes.
Each process is waiting for resource that is held by next member of the chain.

Deadlock recovery
• Recovery through preemption
▪ In some cases it may be possible to temporarily take a resource away from
its current owner and give it to another process.
▪ The ability to take a resource away from a process, have another process
use it, and then give it back without the process noticing it is highly
dependent on the nature of the resource.
▪ Recovering this way is frequently difficult or impossible.
▪ Choosing the process to suspend depends largely on which ones have
resources that can easily be taken back.

• Recovery through rollback


▪ Checkpoint processes periodically.
Checkpointing a process means that its state is written to a file so that it can be restarted
later.

▪ The checkpoint contains not only the memory image, but also the resource
state, that is, which resources are currently assigned to the process.
▪ When a deadlock is detected, it is easy to see which resources are needed.
▪ To do the recovery, a process that owns a needed resource is rolled back to
a point in time before it acquired that resource, by restarting it from one of its
earlier checkpoints.
▪ In effect, the process is reset to an earlier moment when it did not have the
resource, which is now assigned to one of the deadlocked processes.
▪ If the restarted process tries to acquire the resource again, it will have to
wait until it becomes available.

• Recovery through killing processes


▪ The crudest, but simplest way to break a deadlock is to kill one or more
processes.
▪ One possibility is to kill a process in the cycle. With a little luck, the other
processes will be able to continue.
▪ If this does not help, it can be repeated until the cycle is broken.
▪ Alternatively, a process not in the cycle can be chosen as the victim in
order to release its resources.
▪ In this approach, the process to be killed is carefully chosen because it is
holding resources that some process in the cycle needs.
12) Explain TLB and Virtual Memory. [07]
Ans:

Translation Lookaside Buffer (TLB)

In an operating system that uses paging as its memory management technique, a page table is
created for each process, containing page table entries (PTEs). A PTE contains information
such as the frame number (the address in main memory where the page resides) and some
other useful bits (e.g., a valid/invalid bit, dirty bit, protection bit). The page table entry thus
tells where in main memory the actual page is residing.
Now the question is where to place the page table, such that overall access time (or reference
time) will be less.
The problem is how to quickly access main memory contents based on the address generated
by the CPU (i.e. the logical/virtual address). Initially, some people thought of using registers to
store the page table, as they are high-speed memory, but page tables are generally too large; so
a TLB (Translation Lookaside Buffer) is used instead: a small, fast hardware cache of recently
used page table entries that is consulted before the page table in main memory.
Steps in TLB hit:
1. CPU generates virtual address.
2. It is checked in TLB (present).
3. Corresponding frame number is retrieved, which now tells where in the main memory
page lies.
Steps in a TLB miss:
1. CPU generates virtual address.
2. It is checked in TLB (not present).
3. Now the page number is matched to page table residing in main memory (assuming page
table contains all PTE).
4. Corresponding frame number is retrieved, which now tells where in the main memory
page lies.
5. The TLB is updated with the new PTE (if there is no free slot, one of the replacement
techniques comes into the picture, e.g. FIFO, LRU or MFU).
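A short worked example (all numbers hypothetical): with a 4 KB page size, a 32-bit virtual
address splits into a 20-bit page number and a 12-bit offset. For virtual address 0x00003ABC,
the page number is 3 and the offset is 0xABC; if the TLB (or page table) maps page 3 to frame 7,
the physical address is 7 × 4096 + 0xABC = 0x7ABC. With a TLB access time of 10 ns, a memory
access time of 100 ns and a hit ratio of 0.9, the effective access time is
0.9 × (10 + 100) + 0.1 × (10 + 100 + 100) = 120 ns.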
Virtual Memory:

Virtual Memory is a storage allocation scheme in which secondary memory can be addressed
as though it were part of main memory. The addresses a program may use to reference memory
are distinguished from the addresses the memory system uses to identify physical storage sites,
and program generated addresses are translated automatically to the corresponding machine
addresses.

The size of virtual storage is limited by the addressing scheme of the computer system and by
the amount of secondary memory available, not by the actual number of main storage
locations.
It is a technique that is implemented using both hardware and software. It maps memory
addresses used by a program, called virtual addresses, into physical addresses in computer
memory.
1. All memory references within a process are logical addresses that are dynamically
translated into physical addresses at run time. This means that a process can be swapped
in and out of main memory such that it occupies different places in main memory at
different times during the course of execution.
2. A process may be broken into a number of pieces and these pieces need not be contiguously
located in the main memory during execution.

13) Explain thread implementation in user space with its advantages and
disadvantages. [07]
Ans:

User level threads

User level threads are supported above the kernel in user space and are managed without kernel
support.

• Threads are managed entirely by the run-time system (a user-level library).


• Ideally, thread operations should be as fast as a function call.
• The kernel knows nothing about user-level threads and manages the process as if it
were a single-threaded process.

Advantages

• Can be implemented on an OS that does not support kernel-level threads.


• Does not require modifications of the OS.
• Simple representation: PC, registers, stack and small thread control block all stored in
the user-level process address space.
• Simple management: Creating, switching and synchronizing threads done in user-space
without kernel intervention.
• Fast and efficient: switching threads not much more expensive than a function call.

Disadvantages

• Not a perfect solution (a trade off).


• Lack of coordination between the user-level thread manager and the kernel.
• OS may make poor decisions like:
o scheduling a process with idle threads
o blocking a process due to a blocking thread even though the process has other
threads that can run
o giving a process as a whole one time slice irrespective of whether the process has
1 or 1000 threads
o unscheduling a process with a thread holding a lock.
• May require communication between the kernel and the user-level thread manager
(scheduler activations) to overcome the above problems.

User-level thread models

In general, user-level threads can be implemented using one of four models.

• Many-to-one
• One-to-one
• Many-to-many
• Two-level

All models map user-level threads to kernel-level threads. A kernel thread is similar to a
process in a non-threaded (single-threaded) system. The kernel thread is the unit of execution
that is scheduled by the kernel to execute on the CPU. The term virtual processor is often used
instead of kernel thread.

Many-to-one

In the many-to-one model all user level threads execute on the same kernel thread. The process
can only run one user-level thread at a time because there is only one kernel-level thread
associated with the process.

The kernel has no knowledge of user-level threads. From its perspective, a process is an opaque
black box that occasionally makes system calls.

One-to-one

In the one-to-one model every user-level thread executes on a separate kernel-level thread.

In this model the kernel must provide a system call for creating a new kernel thread.

Many-to-many

In the many-to-many model the process is allocated m kernel-level threads to
execute n user-level threads.

Two-level

The two-level model is similar to the many-to-many model but also allows for certain user-
level threads to be bound to a single kernel-level thread.

14) List the different file implementation methods and explain them in detail. [07]
Ans:

Probably the most important issue in implementing file storage is keeping
track of which blocks go with which file. Various methods to implement
files are listed below:

• Contiguous Allocation
• Linked List Allocation
• Linked List Allocation Using A Table In Memory
• I-nodes

Contiguous Allocation

▪ The simplest allocation scheme is to store each file as a contiguous run of disk blocks.
▪ We see an example of contiguous storage allocation in fig. 7-5.
▪ Here the first 40 disk blocks are shown, starting with block 0 on the left. Initially, the
disk was empty.

Advantages

▪ First it is simple to implement because keeping track of where a file’s blocks are is reduced to
remembering two numbers: The disk address of the first block and the number of blocks in the
file.
▪ Second, the read performance is excellent because the entire file can be read from the disk in a
single operation. Only one seek is needed (to the first block), so data comes in at the full
bandwidth of the disk.
▪ Thus contiguous allocation is simple to implement and has high performance

Drawbacks

▪ Over the course of time, the disk becomes fragmented.


▪ Initially, fragmentation is not a problem, since each new file can be written at the end of the
disk, following the previous one.
▪ However, eventually the disk will fill up and it will become necessary to either compact the
disk, which is expensive, or to reuse the free space in the holes.
▪ Reusing the space requires maintaining a list of holes.
▪ However, when a new file is to be created, it is necessary to know its final size in order to
choose a hole of the correct size to place it in.

▪ There is one situation in which contiguous allocation is feasible and, in fact, widely used: on
CD-ROMs. Here all the file sizes are known in advance and will never change during use of
the CD-ROM file system.

Linked List Allocation

▪ Another method for storing files is to keep each one as a linked list of the disk blocks
▪ The first word of each block is used as a pointer to the next one. The rest of the block is for
data.
▪ Unlike contiguous allocation, every disk block can be used in this method. No space is lost to
disk fragmentation.
▪ It is sufficient for a directory entry to store only the disk address of the first block; the rest can
be found starting there.

Drawbacks

▪ Although reading a file sequentially is straightforward, random access is extremely slow. To
get to block n, the operating system has to start at the beginning and read the n-1 blocks prior
to it, one at a time.
I-nodes

▪ A method for keeping track of which blocks belong to which file is to associate with each file
a data structure called an i-node (index-node), which lists the attributes and
disk addresses of the file’s blocks.
▪ A simple example is given in figure 7-8.
▪ Given the i-node, it is then possible to find all the blocks of the file.
▪ The big advantage of this scheme over linked files using an in-memory table is that the i-node
need only be in memory when the corresponding file is open.
▪ If each i-node occupies n bytes and a maximum of k files may be open at once, the total memory
occupied by the array holding the i-nodes for the open files is only kn bytes. Only this much
space needs to be reserved in advance.
▪ One problem with i-nodes is that if each one has room for a fixed number of disk addresses,
what happens when a file grows beyond this limit?
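A minimal sketch of an i-node layout in C (field names and counts are illustrative assumptions;
real designs answer the growth question with indirect blocks, as the last field hints):

    #define NDIRECT 10                  /* assumed number of direct addresses */

    struct inode {
        unsigned short mode;            /* file type and protection bits      */
        unsigned short links;           /* number of directory entries        */
        unsigned int   size;            /* file size in bytes                 */
        unsigned int   atime, mtime;    /* access / modification times        */
        unsigned int   direct[NDIRECT]; /* disk addresses of the first blocks */
        unsigned int   single_indirect; /* disk address of a block that holds
                                           further block addresses, used when
                                           the file outgrows the direct slots */
    };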

15) Explain Banker's algorithm to avoid deadlock. [07]


Ans:

Deadlock avoidance and Banker's algorithm for deadlock avoidance


Deadlock can be avoided by allocating resources carefully
• Carefully analyze each resource request to see if it can be safely granted.
• Need an algorithm that can always avoid deadlock by making right choice all
the time.
Banker’s algorithm for single resource
• A scheduling algorithm that can avoid deadlocks is due to Dijkstra (1965); it
is known as the banker's algorithm and is an extension of the deadlock
detection algorithm.
• It is modeled on the way a small town banker might deal with a group of
customers to whom he has granted lines of credit.
• What the algorithm does is check to see if granting the request leads to an
unsafe state. If it does, the request is denied.
• If granting the request leads to a safe state, it is carried out.
• In fig. 4-8 we see four customers, A, B, C, and D, each of whom has been
granted a certain number of credit units.
• The banker knows that not all customers will need their maximum credit at a
time, so he has reserved only 10 units rather than 22 to service them.

• The customers go about their respective businesses, making loan requests
from time to time (i.e. asking for resources).
• First, if we have the situation as per fig. (a), then it is a safe state because with 10 free
units all the customers can be served one by one.
• The second situation is shown in fig. (b). This state is safe because with two
units left (free units), the banker can allocate units to C, thus letting C finish
and release all four of his resources.
With those four free units, the banker can let either D or B have the necessary units, and
so on.
• Consider the third situation: what would happen if a request from B for one
more unit were granted, as shown in fig. (c)? Then it becomes an unsafe state.
• In this situation we have only one free unit, and C needs a minimum of 2 more units.
No customer can obtain all the resources needed to finish, so the state is unsafe.
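A sketch of the safety check behind this reasoning, for the single-resource banker (the array
names are assumptions; has[i] is what customer i holds, max[i] is his credit line):

    #include <stdbool.h>

    /* Returns true if every customer can finish in some order. */
    bool is_safe(int n, const int has[], const int max[], int free_units) {
        bool done[16] = { false };           /* assumes n <= 16 */
        for (int finished = 0; finished < n; ) {
            int progress = 0;
            for (int i = 0; i < n; i++) {
                /* Customer i can finish if the rest of his need fits. */
                if (!done[i] && max[i] - has[i] <= free_units) {
                    free_units += has[i];    /* i finishes, returns his units */
                    done[i] = true;
                    finished++;
                    progress++;
                }
            }
            if (progress == 0)
                return false;                /* nobody can finish: unsafe */
        }
        return true;                         /* everyone can finish: safe */
    }

A request is granted only if the state that would result from granting it still passes is_safe().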
16) Explain the different types of OS and also explain the different types of tasks done by
the OS. [07]
Ans:
Types of Operating system

Batch Operating System

Some computer processes are very lengthy and time-consuming. To speed up processing,
jobs with similar needs are batched together and run as a group.

The user of a batch operating system never directly interacts with the computer. In this type
of OS, every user prepares his or her job on an offline device like a punch card and submits it
to the computer operator.

Multi-Tasking/Time-sharing Operating systems

A time-sharing operating system enables people located at different terminals (shells) to use a


single computer system at the same time. The processor time (CPU) which is shared among
multiple users is termed time sharing.

Real time OS

In a real-time operating system, the time interval to process and respond to inputs is very
small. Examples: military software systems, space software systems.

Distributed Operating System

Distributed systems use many processors located in different machines to provide very fast
computation to its users.

Network Operating System

A network operating system runs on a server. It provides the capability to manage
data, users, groups, security, applications, and other networking functions.

Mobile OS

Mobile operating systems are those OS that are especially designed to power
smartphones, tablets, and wearable devices.

Different types of tasks done by the OS:


• Operating system services and facilities can be grouped into the following areas:
Program development
• Operating system provides editors and debuggers to assist (help) the programmer
in creating programs.
• Usually these services are in the form of utility programs and are not strictly part of
the core operating system. They are supplied with the operating system and referred
to as application program development tools.
Program execution
▪ A number of tasks need to be performed to execute a program: instructions
and data must be loaded into main memory, and I/O devices and files must be
initialized.
▪ The operating system handles these scheduling duties for the user.
Access to I/O devices
▪ Each I/O device requires its own set of instructions for operation.
▪ Operating system provides a uniform interface that hides these details, so the
programmer can access such devices using simple reads and writes.
Memory Management
▪ Operating System manages memory hierarchy.
▪ It keeps track of which parts of memory are in use and which are free.
▪ It allocates the memory to programs when they need it.
▪ It de-allocates the memory when programs finish execution.
Controlled access to file
▪ In the case of file access, operating system provides a directory hierarchy for easy
access and management of files.


▪ OS provides various file handling commands using which users can easily read,
write, and modify files.
▪ In the case of a system with multiple users, the operating system may provide a protection
mechanism to control access to files.
System access
▪ In case of public systems, the operating system controls access to the system as a
whole.
▪ The access function must provide protection of resources and data from
unauthorized users.
Error detection and response
▪ Various types of errors can occur while a computer system is running, including internal
and external hardware errors (for example, a memory error or a device failure) and
software errors (such as arithmetic overflow).
▪ In each case, the operating system must provide a response that clears the error condition
with the least impact on running applications.
Accounting
▪ A good operating system collects usage statistics for various resources and monitors
performance parameters.
▪ On any system, this information is useful in anticipating need for future
enhancements.
Protection & Security
▪ Operating systems provide various options for protection and security purposes.
▪ It allows the users to secure files from unwanted usage.
▪ It protects restricted memory area from unauthorized access.
▪ Protection involves ensuring that all access to system resources is controlled.
17) Define and explain the following terms: (i) Authentication (ii) Mutual Exclusion
(iii) Monitor (iv) Segmentation [07]
Ans:

(i) Authentication
Authentication refers to identifying each user of the system and associating the executing
programs with those users. It is the responsibility of the operating system to create a protection
system which ensures that a user who is running a particular program is authentic. Operating
systems generally identify/authenticate users in the following three ways:
• Username / Password − The user needs to enter a registered username and password with
the operating system to log into the system.
• User card/key − The user needs to punch a card into a card slot, or enter a key generated by
a key generator, in the option provided by the operating system to log into the system.

• User attribute (fingerprint / eye retina pattern / signature) − The user needs to pass his/her
attribute via a designated input device used by the operating system to log into the system.
(ii) Mutual Exclusion
A mutual exclusion (mutex) is a program object that prevents simultaneous access to a shared
resource. This concept is used in concurrent programming with a critical section, a piece of
code in which processes or threads access a shared resource. Only one thread owns the mutex
at a time; thus a mutex with a unique name is created when a program starts. When a thread
holds a resource, it has to lock the mutex to prevent concurrent access of the resource by
other threads. Upon releasing the resource, the thread unlocks the mutex. A mutex acts as a
lock and is the most basic synchronization tool. When a thread tries to acquire a mutex, it
gains the mutex if it is available; otherwise the thread is put to sleep. Mutual exclusion
reduces latency and busy-waiting by using queuing and context switches. Mutexes can be
enforced at both the hardware and software levels.
(iii) Monitor
The monitor is one of the ways to achieve Process synchronization. The monitor is supported
by programming languages to achieve mutual exclusion between processes, for example
Java synchronized methods. Java provides the wait() and notify() constructs.
1. It is the collection of condition variables and procedures combined together in a special kind
of module or a package.
2. The processes running outside the monitor can’t access the internal variable of the monitor but
can call procedures of the monitor.
3. Only one process at a time can execute code inside monitors.
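C has no built-in monitor construct, so a rough sketch of the same discipline uses a pthread
mutex and condition variable (an emulation of a monitor, not a compiler-enforced one; the
state variable is a made-up example):

    #include <pthread.h>

    static pthread_mutex_t mlock = PTHREAD_MUTEX_INITIALIZER; /* monitor lock  */
    static pthread_cond_t  cond  = PTHREAD_COND_INITIALIZER;  /* condition var */
    static int ready = 0;                     /* protected monitor state */

    void monitor_consume(void) {
        pthread_mutex_lock(&mlock);     /* only one thread inside at a time */
        while (!ready)
            pthread_cond_wait(&cond, &mlock);  /* like Java's wait()        */
        ready = 0;                      /* use the protected state          */
        pthread_mutex_unlock(&mlock);
    }

    void monitor_produce(void) {
        pthread_mutex_lock(&mlock);
        ready = 1;
        pthread_cond_signal(&cond);     /* like Java's notify()             */
        pthread_mutex_unlock(&mlock);
    }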

(iv) Segmentation

In operating systems, segmentation is a memory management technique in which the
memory is divided into variable-size parts. Each part is known as a segment, which can be
allocated to a process.

The details about each segment are stored in a table called a segment table. The segment
table is stored in one (or many) of the segments.

The segment table mainly contains two pieces of information about each segment:

1. Base: It is the base address of the segment


2. Limit: It is the length of the segment.
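A short worked example (hypothetical values): suppose segment 2 has base = 4300 and
limit = 400 in the segment table. A logical address (segment 2, offset 53) is legal because
53 < 400, and it maps to physical address 4300 + 53 = 4353; an offset of 500 would be rejected
with a trap to the operating system because 500 ≥ 400.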

18) Explain context switching. Discuss performance evaluation of FCFS (First Come
First Serve) & RR (Round Robin) scheduling. [07]
Ans:

Switching the CPU to another process requires saving the state of the old process and
loading the saved state for the new process.
This task is known as a context switch.
The context of a process is represented in the PCB of a process; it includes the
value of the CPU registers, the process state and memory-management information.
When a context switch occurs, the kernel saves the context of the old process in
its PCB and loads the saved context of the new process scheduled to run.
Context-switch time is pure overhead, because the system does no useful work
while switching.
FCFS (First Come First Serve):
• Selection criteria:
The process that requests first is served first. It means that processes are served in the exact
order of their arrival.
• Decision Mode :
Non preemptive: Once a process is selected, it runs until it is blocked for an I/O or some
event, or it is terminated.
• Implementation:
This strategy can be easily implemented by using a FIFO (First In, First Out) queue.
When the CPU becomes free, the process at the first position in the queue is selected to run.
• Advantages:
➢ Simple, fair, no starvation.
➢ Easy to understand, easy to implement.
• Disadvantages :
➢ Not efficient. Average waiting time is too high.
➢ Convoy effect is possible. All small I/O bound processes wait for one big CPU bound process
to acquire CPU.
➢ CPU utilization may be less efficient especially when a CPU bound process is running with
many I/O bound processes.
RR (Round Robin) scheduling
• Selection Criteria:
• Each selected process is assigned a time interval, called time quantum or time slice. Process
is allowed to run only for this time interval. Here, two things are possible: First, Process is
either blocked or terminated before the quantum has elapsed. In this case the CPU switching
is done and another process is scheduled to run. Second, Process needs CPU burst longer
than time quantum. In this case, process is running at the end of the time quantum. Now, it
will be preempted and moved to the end of the queue. CPU will be allocated to another
process. Here, length of time quantum is critical to determine.
• Decision Mode:
Preemptive.
• Implementation :
This strategy can be implemented by using a circular FIFO queue. When a process arrives,
releases the CPU, or is preempted, it is moved to the end of the queue. When the CPU
becomes free, the process at the first position in the queue is selected to run.

• Advantages:
➢ One of the oldest, simplest, fairest and most widely used algorithms.
• Disadvantages:
➢ Context switch overhead is there.
➢ Determination of time quantum is too critical. If it is too short, it causes frequent context
switches and lowers CPU efficiency. If it is too long, it causes poor response for short
interactive process.
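A small worked comparison (hypothetical bursts): let P1, P2, P3 arrive at time 0 with CPU
bursts of 24, 3 and 3. Under FCFS the order is P1, P2, P3, so the waiting times are 0, 24 and 27,
an average of 17. Under RR with a time quantum of 4, the schedule is P1 (0-4), P2 (4-7),
P3 (7-10), then P1 runs to completion (10-30); the waiting times are 6, 4 and 7, an average of
about 5.7. This shows why RR gives much better response to short jobs, at the cost of extra
context switches.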
19) Explain the following allocation algorithms: 1) First-fit 2) Best-fit 3) Worst-fit [07]
Ans:
First Fit
The first fit approach is to allocate the first free partition or hole large enough to
accommodate the process. It finishes after finding the first suitable free partition.
The search starts from the beginning of the memory.
The first available hole that is large enough to hold the process is selected for allocation.
The hole is then broken up into two pieces, one for the process and another for the unused
memory.
Search time is smaller here.
Memory loss is higher, as a very large hole may be selected for a small process.
For example, with free holes of 100K, 500K, 200K, 300K and 600K and requests of 212K,
417K and 112K (in that order), first fit places 212K in the 500K hole, 417K in the 600K hole
and 112K in the 288K remainder of the 500K hole. The remaining holes are 100K, 176K,
200K, 300K and 183K, so a process of size 426K will not get any partition for allocation.

Advantage

Fastest algorithm because it searches as little as possible.

Disadvantage

The remaining unused memory areas left after allocation become wasted if they are too small.
Thus, requests for larger memory cannot be accomplished.
Best Fit
The best fit deals with allocating the smallest free partition which meets the requirement of the
requesting process. This algorithm first searches the entire list of free partitions and considers
the smallest hole that is adequate. It then tries to find a hole which is close to the actual
process size needed.
Entire memory is searched here.
The smallest hole, which is large enough to hold the process, is selected for
allocation.
Search time is high, as it searches entire memory.


Memory loss is less. More sensitive to external fragmentation, as it leaves tiny
holes into which no process can fit.

For example, with the same free holes of 100K, 500K, 200K, 300K and 600K, best fit places
212K in the 300K hole, 417K in the 500K hole, 112K in the 200K hole and 426K in the 600K
hole, so all four requests are satisfied.

Advantage

Memory utilization is much better than first fit, as it allocates the smallest suitable free
partition available.

Disadvantage

It is slower and may even tend to fill up memory with tiny useless holes.
Worst fit
The worst fit approach is to locate the largest available free portion so that the portion left
over will be big enough to be useful. It is the reverse of best fit.
The entire memory is searched here also. The largest hole, which is large enough to
hold the process, is selected for allocation.
This algorithm can be used only with dynamic partitioning.
For example, with the same holes and requests, worst fit places 212K in the 600K hole
(leaving 388K), 417K in the 500K hole and 112K in the 388K remainder (leaving 276K).
The largest remaining hole is then 300K, so the process of size 426K will not get any
partition for allocation.

Advantage

Reduces the rate of production of small gaps.

Disadvantage

If a process requiring larger memory arrives at a later stage then it cannot be accommodated
as the largest hole is already split and occupied.
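A minimal first-fit sketch over a free list in C (the structure and names are illustrative
assumptions; best fit would instead scan the whole list and keep the smallest adequate hole):

    #include <stddef.h>

    struct hole {                    /* one node per free partition */
        size_t       size;
        void        *start;
        struct hole *next;
    };

    /* Return the first hole big enough; split off the unused remainder. */
    void *first_fit(struct hole *list, size_t request) {
        for (struct hole *h = list; h != NULL; h = h->next) {
            if (h->size >= request) {                   /* first hole that fits */
                void *addr = h->start;
                h->start = (char *)h->start + request;  /* shrink the hole      */
                h->size -= request;                     /* remainder stays free */
                return addr;
            }
        }
        return NULL;    /* no hole large enough (external fragmentation) */
    }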

20) What is deadlock? List the conditions that lead to deadlock. How can deadlock be
prevented? [07]
Ans:

Deadlock is a situation where-


• The execution of two or more processes is blocked because each process holds some
resource and waits for another resource held by some other process.
Example-

Here
• Process P1 holds resource R1 and waits for resource R2 which is held by process P2.
• Process P2 holds resource R2 and waits for resource R1 which is held by process P1.
• None of the two processes can complete and release their resource.
• Thus, both the processes keep waiting infinitely.

Conditions For Deadlock-

There are the following four necessary conditions for the occurrence of deadlock-


1. Mutual Exclusion
2. Hold and Wait
3. No preemption
4. Circular wait

1. Mutual Exclusion-

By this condition,
• There must exist at least one resource in the system which can be used by only one process at
a time.
• If there exists no such resource, then deadlock will never occur.
• Printer is an example of a resource that can be used by only one process at a time.

2. Hold and Wait-

By this condition,
• There must exist a process which holds some resource and waits for another resource held by
some other process.

3. No Preemption-

By this condition,
• Once the resource has been allocated to a process, it cannot be preempted.
• It means resource can not be snatched forcefully from one process and given to the other
process.
• The process must release the resource voluntarily by itself.

4. Circular Wait-

By this condition,
• All the processes must wait for the resource in a cyclic manner where the last process waits for
the resource held by the first process.

Here,
• Process P1 waits for a resource held by process P2.
• Process P2 waits for a resource held by process P3.
• Process P3 waits for a resource held by process P4.
• Process P4 waits for a resource held by process P1.

Deadlock Prevention
• Deadlock can be prevented by attacking one of the four conditions that lead
to deadlock.
1) Attacking the Mutual Exclusion Condition
o No deadlock if no resource is ever assigned exclusively to a single process.
o Some devices, such as the printer, can be spooled; by spooling printer output, several
processes can generate output at the same time.
o Only the printer daemon process uses the physical printer.
o Thus deadlock for printer can be eliminated.
o Not all devices can be spooled.
▪ Principle: Avoid assigning a resource when that is not absolutely necessary.
▪ Try to make sure that as few processes as possible actually claim the resource.
2) Attacking the Hold and Wait Condition
o Require processes to request all their resources before starting execution.
o A process is allowed to run only if all the resources it needs are available. Otherwise
nothing will be allocated and it will just wait.
o A problem with this strategy is that a process may not know its required resources at
the start of the run.
o Resources will not be used optimally.
o It also ties up resources other processes could be using.
o Variation: A process must give up all resources before making a new request.
Process is then granted all prior resources as well as the new ones only if all
required resources are available.
o Problem: what if another process grabs the resources in the meantime? And how can
the process save its state?
3) Attacking the No Preemption Condition
o This is not a possible option.
o When a process P0 requests some resource R which is held by another process P1,
then resource R is forcibly taken away from process P1 and allocated to P0.
o Consider a process that holds the printer halfway through its job; taking the printer
away from this process without any ill effect is not possible.
4) Attacking the Circular Wait Condition
o To provide a global numbering of all the resources.
o Now the rule is this: processes can request resources whenever they want to, but
all requests must be made in numerical order.
o A process need not acquire them all at once.
o Circular wait is prevented because a process holding resource n cannot wait for a
resource m with m < n; requests only ever move upward in the numbering.
o There is thus no way to complete a cycle.
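This idea is applied in practice as lock ordering. A minimal sketch in C with pthread
mutexes, where every thread honours one global numbering (the lock names are
assumptions):

    #include <pthread.h>

    /* Global numbering: lock_a is resource 1, lock_b is resource 2. */
    pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

    void *worker(void *arg) {
        /* Every thread requests resources in ascending numerical order,
           so a cycle (and hence deadlock) can never form. */
        pthread_mutex_lock(&lock_a);    /* resource 1 first */
        pthread_mutex_lock(&lock_b);    /* then resource 2  */
        /* ... use both resources ... */
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
        return arg;
    }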

21) Explain the microkernel system architecture in detail. [07]


Ans:
• With the layered approach, the designers have a choice of where to draw the
kernel-user boundary.
• Traditionally, all the layers went in the kernel, but that is not necessary.
• In fact, a strong case can be made for putting as little as possible in kernel mode
because bugs in the kernel can bring down the system instantly.
• In contrast, user processes can be set up to have less power so that a bug may not
be fatal.
• The basic idea behind the microkernel design is to achieve high reliability by splitting the operating system into small, well-defined modules, only one of which, the microkernel, runs in kernel mode; all the rest run as relatively powerless user processes in user mode.
• By running each device driver and file system as separate user processes, a bug
in one of these can crash that component but cannot crash the entire system.
• Examples of microkernel are Integrity, K42, L4, PikeOS, QNX, Symbian, and MINIX 3.
• The MINIX 3 microkernel is only about 3200 lines of C code and 800 lines of assembler for low-level functions such as catching interrupts and switching processes.
• The C code manages and schedules processes, handles interprocess communication, and offers a set of about 35 system calls to the rest of the OS to do its work.

• The process structure of MINIX 3 is shown in figure 1-5, with kernel call handler
labeled as Sys.
• The device driver for the clock is also in the kernel because the scheduler
interacts closely with it. All the other device drivers run as separate user
processes.
• Outside the kernel, the system is structured as three layers of processes all
running in user mode.

• The lowest layer contains the device drivers. Since they run in user mode, they do not have access to the I/O port space and cannot issue I/O commands directly.
• Above the drivers is another user-mode layer containing the servers, which do most of the work of an operating system.
• One interesting server is the reincarnation server, whose job is to check if the other
servers and drivers are functioning correctly. In the event that a faulty one is detected, it
is automatically replaced without any user intervention.
• All the user programs lie on the top layer.

22) What is thread? Explain thread Structure? And explain any one type of thread in
details.[07]
Ans:

Thread:
• A program has one or more loci of execution; each one is called a thread of execution.
• In traditional operating systems, each process has an address space and a single
thread of execution.
• It is the smallest unit of processing that can be scheduled by an operating system.
• A thread is a single sequential stream within a process. Because threads have some of the properties of processes, they are sometimes called lightweight processes. Within a process, threads allow multiple streams of execution.
Thread Structure
• Process is used to group resources together and threads are the entities
scheduled for execution on the CPU.
• The thread has a program counter that keeps track of which instruction to
execute next.
• It has registers, which hold its current working variables.
• It has a stack, which contains the execution history, with one frame for each procedure called but not yet returned from.
• Although a thread must execute in some process, the thread and its process are different concepts and can be treated separately.
• What threads add to the process model is to allow multiple executions to take
place in the same process environment, to a large degree independent of one
another.
• Having multiple threads running in parallel in one process is similar to
having multiple processes running in parallel in one computer.

• In the former case, the threads share an address space, open files, and other resources.
• In the latter case, the processes share physical memory, disks, printers and other resources.
• In Fig(a) we see three traditional processes. Each process has its own address space
and a single thread of control.
• In contrast, in Fig(b) we see a single process with three threads of control.
• Although in both cases we have three threads, in Fig(a) each of them operates in a
different address space, whereas in Fig. (b) all three of them share the same address
space.
• Like a traditional process (i.e., a process with only one thread), a thread can be in
any one of several states: running, blocked, ready, or terminated.
• When multithreading is present, processes normally start with a single thread
present. This thread has the ability to create new threads by calling a library
procedure thread_create.
• When a thread has finished its work, it can exit by calling a library procedure
thread_exit.
• One thread can wait for a (specific) thread to exit by calling a procedure
thread_join. This procedure blocks the calling thread until a (specific) thread has
exited.
• Another common thread call is thread_yield, which allows a thread to voluntarily
give up the CPU to let another thread run.
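These calls map closely onto POSIX threads. The following is a minimal C sketch (an illustration, not part of the original answer): pthread_create, pthread_join and pthread_exit correspond to the generic thread_create, thread_join and thread_exit above, and sched_yield plays the role of thread_yield.

/* Build with: gcc -pthread threads.c */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

void *task(void *arg) {
    printf("worker thread running\n");
    sched_yield();        /* voluntarily give up the CPU (thread_yield) */
    pthread_exit(NULL);   /* finished its work (thread_exit) */
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, task, NULL);  /* thread_create */
    pthread_join(tid, NULL);                 /* block until the thread exits */
    printf("worker has exited\n");
    return 0;
}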

Types of thread
1. User Level Threads
2. Kernel Level Threads
User Level Threads
• User level threads are implemented in user level libraries, rather than via systems
calls.
• So thread switching does not need to call operating system and to cause interrupt
to the kernel.
• The kernel knows nothing about user level threads and manages them as if they
were single threaded processes.
• When threads are managed in user space, each process needs its own private
thread table to keep track of the threads in that process.
• This table keeps track only of the per-thread properties, such as each thread's program counter, stack pointer, registers, state, and so forth.
• The thread table is managed by the run-time system.
Advantages
▪ It can be implemented on an Operating System that does not support threads.
▪ A user level thread does not require modification to operating systems.
▪ Simple Representation: Each thread is represented simply by a PC, registers, stack
and a small control block, all stored in the user process address space.
▪ Simple Management: This simply means that creating a thread, switching between
threads and synchronization between threads can all be done without intervention
of the kernel.
▪ Fast and Efficient: Thread switching is not much more expensive than a procedure call.
▪ User-level threads also have other advantages: they allow each process to have its own customized scheduling algorithm.


Disadvantages
▪ There is a lack of coordination between threads and the operating system kernel. Therefore, the process as a whole gets one time slice irrespective of whether it has one thread or 1000 threads within it. It is up to each thread to give up control to the other threads.
▪ Another problem with user-level thread packages is that if a thread starts running, no other thread in that process will ever run unless the first thread voluntarily gives up the CPU.
▪ User-level threads require non-blocking system calls, i.e., a multithreaded kernel. Otherwise the entire process is blocked in the kernel even if only a single thread is blocked while other runnable threads are present. For example, if one thread causes a page fault, the whole process will be blocked.

23) Explain Round Robin algorithm with proper example.[07]


Ans:

Round Robin:
Selection Criteria:
Each selected process is assigned a time interval, called a time quantum or time slice. The process is allowed to run only for this time interval. Here, two things are possible: first, the process blocks or terminates before the quantum has elapsed; in this case the CPU switches to another process. Second, the process needs a CPU burst longer than the time quantum; in this case it is still running at the end of the quantum, so it is preempted and moved to the end of the queue, and the CPU is allocated to another process. The length of the time quantum is critical to determine.
Decision Mode:
Preemptive
Implementation :
This strategy can be implemented by using a circular FIFO queue. Whenever a new process arrives, or a process releases the CPU, or a process is preempted, it is placed at the end of the queue. When the CPU becomes free, the process at the front of the queue is selected to run.
Example :
Consider the following set of four processes. Their arrival time and time required to complete
the execution are given in the following table. All time values are in milliseconds. Consider
that time quantum is of 4 ms, and context switch overhead is of 1 ms.

Process | Arrival Time (T0) | Time required for completion (∆T)
P0 | 0 | 10
P1 | 1 | 6
P2 | 3 | 2
P3 | 5 | 4
• Gantt Chart :
| P0 | CS | P1 | CS | P2 | CS | P0 | CS | P3 | CS | P1 | CS | P0 |
0    4    5    9    10   12   13   17   18   22   23   25   26   28
(CS denotes the 1 ms context-switch overhead.)
At 4 ms, process P0 completes its time quantum, so it is preempted and another process, P1, is allowed to run. At 12 ms, process P2 finishes and voluntarily releases the CPU, and another process is selected to run. 1 ms is wasted on each context switch as overhead. This procedure is repeated till all processes complete their execution.

• Statistics:

Process | Arrival time (T0) | Completion Time (∆T) | Finish Time (T1) | Turnaround Time (TAT = T1 - T0) | Waiting Time (WT = TAT - ∆T)
P0 | 0 | 10 | 28 | 28 | 18
P1 | 1 | 6 | 25 | 24 | 18
P2 | 3 | 2 | 12 | 9 | 7
P3 | 5 | 4 | 22 | 17 | 13

Average Turnaround Time: (28+24+9+17)/4 = 78 / 4 = 19.5 ms


Average Waiting Time: (18+18+7+13)/4 = 56 / 4 = 14 ms
Advantages:
One of the oldest, simplest, fairest and most widely used algorithms.
Disadvantages:
It incurs context-switch overhead.
Determination of the time quantum is critical: if it is too short, it causes frequent context switches and lowers CPU efficiency; if it is too long, it causes poor response times for short interactive processes.
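To make the mechanics concrete, the following minimal C sketch (an assumed illustration, not part of the original answer) simulates the example above with a circular FIFO queue, a 4 ms quantum and 1 ms context-switch overhead; it reproduces the finish, turnaround and waiting times from the statistics table.

#include <stdio.h>

#define N 4

int main(void) {
    int arrival[N] = {0, 1, 3, 5};
    int burst[N]   = {10, 6, 2, 4};
    int rem[N], finish[N], in_queue[N] = {0};
    int queue[64], head = 0, tail = 0;
    int t = 0, done = 0, quantum = 4, cs = 1;

    for (int i = 0; i < N; i++) rem[i] = burst[i];
    queue[tail++] = 0; in_queue[0] = 1;          /* P0 has arrived at t = 0 */

    while (done < N) {
        int p = queue[head++];                   /* next process in FIFO order */
        int run = rem[p] < quantum ? rem[p] : quantum;
        t += run; rem[p] -= run;
        for (int i = 0; i < N; i++)              /* enqueue new arrivals */
            if (!in_queue[i] && arrival[i] <= t) { queue[tail++] = i; in_queue[i] = 1; }
        if (rem[p] > 0) queue[tail++] = p;       /* preempted: back of the queue */
        else { finish[p] = t; done++; }
        if (done < N) t += cs;                   /* 1 ms context-switch overhead */
    }
    for (int i = 0; i < N; i++)
        printf("P%d: finish=%d turnaround=%d waiting=%d\n", i, finish[i],
               finish[i] - arrival[i], finish[i] - arrival[i] - burst[i]);
    return 0;
}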
24) Explain context switching. Briefly describe SCAN[07]
Ans:

context switching
Context Switching involves storing the context or state of a process so that it can be reloaded
when required and execution can be resumed from the same point as earlier. This is a feature
of a multitasking operating system and allows a single CPU to be shared by multiple processes.
A diagram that demonstrates context switching is as follows:

In the above diagram, initially Process 1 is running. Process 1 is switched out and Process 2 is
switched in because of an interrupt or a system call. Context switching involves saving the state
of Process 1 into PCB1 and loading the state of process 2 from PCB2. After some time again
a context switch occurs and Process 2 is switched out and Process 1 is switched in again. This
involves saving the state of Process 2 into PCB2 and loading the state of process 1 from PCB1.
Context Switching Triggers
There are three major triggers for context switching. These are given as follows:
• Multitasking: In a multitasking environment, a process is switched out of the CPU so
another process can be run. The state of the old process is saved and the state of the
new process is loaded. On a pre-emptive system, processes may be switched out by the
scheduler.
• Interrupt Handling: The hardware switches a part of the context when an interrupt
occurs. This happens automatically. Only some of the context is changed to minimize
the time required to handle the interrupt.
• User and Kernel Mode Switching: A context switch may take place when a transition
between the user mode and kernel mode is required in the operating system.

• Switching the CPU to another process requires saving the state of the old process and loading the saved state for the new process.
• This task is known as a context switch.
• The context of a process is represented in the PCB of a process; it includes
the value of the CPU registers, the process state and memory-management
information.
• When a context switch occurs, the kernel saves the context of the old
process in its PCB and loads the saved context of the new process scheduled
to run.
• Context-switch time is pure overhead, because the system does no useful
work while switching.
• Its speed varies from machine to machine, depending on the memory speed, the number of registers that must be copied, and the existence of special instructions.

SCAN:

• From the current position, the disk arm starts moving in the up direction towards one end, serving all the pending requests on the way until it reaches that end.
• At that end the arm direction is reversed (down) and the arm moves towards the other end, serving the pending requests on the way.
• As per SCAN, the requests will be satisfied in the order: 11, 12, 16, 34, 36, 50, 9, 1
• Total cylinder movement: (12-11) + (16-12) + (34-16) + (36-34) + (50-36) + (50-9) + (9-1) = 88
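As a quick check of this arithmetic, the short C sketch below (an illustration; the service order is hard-coded from the answer above) sums the distance between consecutive head positions:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int order[] = {11, 12, 16, 34, 36, 50, 9, 1};  /* SCAN service order */
    int n = sizeof(order) / sizeof(order[0]);
    int total = 0;
    for (int i = 1; i < n; i++)
        total += abs(order[i] - order[i - 1]);     /* cylinders moved per step */
    printf("total cylinder movement = %d\n", total);  /* prints 88 */
    return 0;
}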

25) Write a Shell Script to find largest among the 3 given number. What is RAID?
Explain in brief.[07]
Ans:
echo "Enter Num1"
read num1
echo "Enter Num2"
read num2
echo "Enter Num3"
read num3
Subject name: Operating System and Virtulization Subject code:3141601

if [ $num1 -gt $num2 ] && [ $num1 -gt $num3 ]


then
echo $num1
elif [ $num2 -gt $num1 ] && [ $num2 -gt $num3 ]
then
echo $num2
else
echo $num3
fi
RAID – Redundant Array of Independent Disks
RAID (redundant array of independent disks; originally redundant array of inexpensive disks)
is a way of storing the same data in different places on multiple hard disks to protect data in
the case of a drive failure. However, not all RAID levels provide redundancy.

Standard RAID levels

RAID 0: This configuration has striping, but no redundancy of data. It offers the best
performance, but no fault tolerance.

RAID 1: Also known as disk mirroring, this configuration consists of at least two drives that
duplicate the storage of data. There is no striping. Read performance is improved since either
disk can be read at the same time. Write performance is the same as for single disk storage.

RAID 2: This configuration uses striping across disks, with some disks storing error checking
and correcting (ECC) information. It has no advantage over RAID 3 and is no longer used.

RAID 3: This technique uses striping and dedicates one drive to storing parity information.
The embedded ECC information is used to detect errors. Data recovery is accomplished by
calculating the exclusive OR (XOR) of the information recorded on the other drives. Since an
I/O operation addresses all the drives at the same time, RAID 3 cannot overlap I/O. For this
reason, RAID 3 is best for single-user systems with long record applications.
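The XOR-based recovery used by RAID 3 (and RAID 5) can be illustrated in a few lines of C. This is a minimal sketch with made-up block contents, not production RAID code: the parity block is the XOR of the data blocks, and any single lost block can be rebuilt by XORing the survivors with the parity.

#include <stdio.h>
#include <string.h>

#define BLOCK 8

int main(void) {
    unsigned char d0[BLOCK] = "AAAAAAA", d1[BLOCK] = "BBBBBBB", d2[BLOCK] = "CCCCCCC";
    unsigned char parity[BLOCK], rebuilt[BLOCK];

    for (int i = 0; i < BLOCK; i++)
        parity[i] = d0[i] ^ d1[i] ^ d2[i];       /* write the parity block */

    for (int i = 0; i < BLOCK; i++)              /* drive holding d1 fails: */
        rebuilt[i] = d0[i] ^ d2[i] ^ parity[i];  /* rebuild it via XOR */

    printf("recovered d1 intact: %s\n",
           memcmp(rebuilt, d1, BLOCK) == 0 ? "yes" : "no");
    return 0;
}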

RAID 4: This level uses large stripes, which means you can read records from any single drive.
This allows you to use overlapped I/O for read operations. Since all write operations have to
update the parity drive, no I/O overlapping is possible. RAID 4 offers no advantage over RAID
5.

RAID 5: This level is based on block-level striping with parity. The parity information is
striped across each drive, allowing the array to function even if one drive were to fail. The
array's architecture allows read and write operations to span multiple drives. This results in
performance that is usually better than that of a single drive, but not as high as that of a RAID
0 array. RAID 5 requires at least three disks, but it is often recommended to use at least five
disks for performance reasons.

RAID 5 arrays are generally considered to be a poor choice for use on write-intensive systems
because of the performance impact associated with writing parity information. When a disk
does fail, it can take a long time to rebuild a RAID 5 array. Performance is usually degraded
during the rebuild time, and the array is vulnerable to an additional disk failure until the rebuild
is complete.

RAID 6: This technique is similar to RAID 5, but includes a second parity scheme that is
distributed across the drives in the array. The use of additional parity allows the array to
continue to function even if two disks fail simultaneously. However, this extra protection
comes at a cost. RAID 6 arrays have a higher cost per gigabyte (GB) and often have slower
write performance than RAID 5 arrays.

26) Explain the following commands in UNIX:


suid, wall, man, finger, ls, cat, ps [07]
Ans:

1) ls:- Lists the contents of a directory


Syntax :- ls [options]

Description :-
-a Shows you all files, even files that are hidden (these files begin with a dot.)
-A List all files including the hidden files. However, does not display the working
directory (.) or the parent directory (..).
-d If an argument is a directory it only lists its name not its contents
-l Shows you huge amounts of information (permissions, owners, size, and when last
modified.)
-p Displays a slash ( / ) in front of all directories
-r Reverses the order of how the files are displayed
-R Includes the contents of subdirectories

2) cat:- It is used to create, display and concatenate file contents.


Syntax : - cat [options] [FILE]...

Description :-
-A Show all.
-b Numbers the output lines, omitting numbers for blank lines.
-e A $ character will be printed at the end of each line prior to a new line.
-E Displays a $ (dollar sign) at the end of each line.
-n Line numbers for all the output lines.
-s If the output has multiple empty lines it replaces it with one empty line.
-T Displays the tab characters in the output.
-v Non-printing characters (with the exception of tabs, new-lines and form-feeds) are printed visibly.

There are basically three uses of the cat command:


Create new files.
Display the contents of an existing file.
Concatenate the content of multiple files and display.

3) ps:- It is used to report the process status. ps is short for Process Status.
Syntax:- ps [options]

Description :-
-a List information about all processes most frequently requested: all those except process
group leaders and processes not associated with a terminal
-A List information for all processes. Identical to -e, below
-f Generate a full listing
-j Print session ID and process group ID
-l Generate a long listing

4) man:- The man command, which is short for manual, provides in-depth information about the requested command, or allows users to search for commands related to a particular keyword.
Syntax:- man commandname [options]
Description :-

-a Displays all the matching manual pages, one after the other.


-k Searches for keywords in all of the manuals available.

5) wall :- send a message to everybody's terminal.


Syntax :- wall [ message ]
➢ Wall sends a message to everybody logged in with their mesg(1) permission set
to yes. The message can be given as an argument to wall, or it can be sent to
wall's standard input. When using the standard input from a terminal, the
message should be terminated with the EOF key (usually Control-D).
➢ The length of the message is limited to 20 lines.

6) suid:- set user ID
➢ suid (Set owner User ID up on execution) is a special type of file permissions
given to a file.
➢ Normally in Linux/Unix when a program runs, it inherits access permissions
from the logged in user.
➢ suid is defined as giving temporary permission to a user to run a program/file with the permissions of the file owner rather than the user who runs it.
➢ In simple words users will get file owner’s permissions as well as owner UID and
GID when executing a file/program/command.

7) finger:- The finger command displays the user's login name, real name, terminal name and write status (shown as a ''*'' after the terminal name if write permission is denied), idle time, login time, office location and office phone number.
Syntax:- finger [username]

Description :-
-l Force long output format
-s Force short output format

27) What is Paging? Explain paging mechanism in MMU with example[07]


Ans:

Paging
• The program generated address is called as Virtual Addresses and form
the Virtual Address Space.
• Most virtual memory systems use a technique called paging.
• Virtual address space is divided into fixed-size partitions called pages.
• The corresponding units in the physical memory are called as page frames.
• The pages and page frames are always of the same size.
• The size of the virtual address space is greater than that of main memory, so instead of loading the entire address space into memory to run the process, the MMU copies only the required pages into main memory.
• In order to keep the track of pages and page frames, OS maintains a data
structure called page table.
MMU(Memory Management Unit)-
The run time mapping between Virtual address and Physical Address is done by hardware
device known as MMU.
In memory management, the operating system handles the processes and moves them between disk and memory for execution. It keeps track of available and used memory.
Virtual and physical addresses are the same in compile-time and load-time address-binding
schemes. Virtual and physical addresses differ in execution-time address-binding scheme.
The set of all logical addresses generated by a program is referred to as a logical address
space. The set of all physical addresses corresponding to these logical addresses is referred to
as a physical address space.
The runtime mapping from virtual to physical address is done by the memory management
unit (MMU) which is a hardware device. MMU uses following mechanism to convert virtual
address to physical address.
• The value in the base register is added to every address generated by a user process,
which is treated as offset at the time it is sent to memory. For example, if the base
register value is 10000, then an attempt by the user to use address location 100 will be
dynamically reallocated to location 10100.
• The user program deals with virtual addresses; it never sees the real physical addresses.

Instruction-execution cycle Follows steps:


1. First, an instruction is fetched from memory, e.g. ADD A, B.
2. Then the instruction is decoded, i.e., addition of A and B.
3. Further, loading or storing at some particular memory location takes place.

• The Physical Address Space is conceptually divided into a number of fixed-size blocks,
called frames.
• The Logical Address Space is also split into fixed-size blocks, called pages.
• Page Size = Frame Size
Let us consider an example:
• Physical Address = 12 bits, then Physical Address Space = 4 K words
• Logical Address = 13 bits, then Logical Address Space = 8 K words
• Page size = frame size = 1 K words (assumption)

Address generated by CPU is divided into


• Page number (p): the number of bits required to represent a page in the Logical Address Space; these bits give the page number.
• Page offset (d): the number of bits required to represent a particular word in a page; these bits give the word number within the page, i.e., the page offset.
Physical Address is divided into
• Frame number (f): the number of bits required to represent a frame of the Physical Address Space; these bits give the frame number.
• Frame offset (d): the number of bits required to represent a particular word in a frame; these bits give the word number within the frame, i.e., the frame offset.
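The translation itself can be sketched in a few lines of C using the example sizes above (1 K-word pages give a 10-bit offset). The page-table contents and the not-present marker here are assumptions for illustration only:

#include <stdio.h>

#define PAGE_BITS 10                  /* 1 K words per page/frame */
#define PAGE_SIZE (1 << PAGE_BITS)

/* 13-bit logical space -> 8 pages; 12-bit physical space -> 4 frames.
 * -1 marks a page that is not currently in main memory. */
int page_table[8] = {2, -1, 3, 1, -1, -1, 0, -1};

int translate(unsigned vaddr) {
    unsigned p = vaddr >> PAGE_BITS;        /* page number: high-order bits */
    unsigned d = vaddr & (PAGE_SIZE - 1);   /* page offset: low-order bits  */
    if (page_table[p] < 0) {
        printf("page fault on page %u\n", p);   /* MMU traps to the OS */
        return -1;
    }
    return (page_table[p] << PAGE_BITS) | d;    /* frame number + offset */
}

int main(void) {
    unsigned v = 0x0C05;                    /* page 3, offset 5 */
    printf("virtual 0x%04X -> physical 0x%03X\n", v, translate(v));
    return 0;
}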

28) What do you mean by mutual exclusion? Explain Peterson’s solution for mutual
exclusion problem[07]
Ans:

Mutual Exclusion:
Mutual exclusion implies that only one process can be inside the critical section at any time. If
any other processes require the critical section, they must wait until it is free.
It is a way of making sure that if one process is using a shared variable or file; the
other process will be excluded (stopped) from doing the same thing.

• A mutual exclusion (mutex) is a program object that prevents simultaneous access to a


shared resource.
• This concept is used in concurrent programming with a critical section, a piece of code
in which processes or threads access a shared resource.
• Only one thread owns the mutex at a time, thus a mutex with a unique name is created
when a program starts.
• When a thread holds a resource, it has to lock the mutex from other threads to prevent
concurrent access of the resource. Upon releasing the resource, the thread unlocks the
mutex.
• Mutex comes into the picture when two threads work on the same data at the same time.
It acts as a lock and is the most basic synchronization tool.
• When a thread tries to acquire a mutex, it gains the mutex if it is available; otherwise the thread is put to sleep.
• Mutex implementations reduce latency and busy-waiting by using queuing and context switches. Mutex can be enforced at both the hardware and software levels.
• Disabling interrupts for the smallest number of instructions is the best way to enforce
mutex at the kernel level and prevent the corruption of shared data structures. If multiple
processors share the same memory, a flag is set to enable and disable the resource
acquisition based on availability.
• The busy-wait mechanism enforces mutex in the software areas. This is furnished with
algorithms such as Dekker's algorithm, the black-white bakery algorithm, Szymanski's
algorithm, Peterson's algorithm and Lamport's bakery algorithm.

Mutual Exclusion Conditions


If we could arrange matters such that no two processes were ever in their critical sections
simultaneously, we could avoid race conditions. We need four conditions to hold to have a
good solution for the critical section problem (mutual exclusion).

• No two processes may be simultaneously inside their critical sections.
• No assumptions may be made about the relative speeds of processes or the number of CPUs.
• No process running outside its critical section may block other processes.
• No process should have to wait arbitrarily long to enter its critical section.

Peterson's Algorithm

• This is a much simpler algorithm developed by Peterson. In a remarkable 1981 paper


of less than two pages, Peterson developed and proved versions of his algorithm for
both the 2-process case and the N-process case.

CONCEPT:

• Both the turn variable and the status flags are used, as in Dekker's algorithm. After
setting our flag we immediately give away the turn.
• We then wait for the turn if and only if the other flag is set. By waiting on the AND of the two conditions, we avoid the need to clear and reset the flags.
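The concept above can be written down directly in C. The sketch below is a minimal illustration of the 2-process case, with the conventional variable names flag and turn; on modern hardware a real implementation would also need memory barriers, which are omitted here.

#include <stdbool.h>

volatile bool flag[2] = {false, false};  /* flag[i]: process i wants to enter */
volatile int turn = 0;                   /* whose turn it is to defer */

void enter_region(int self) {            /* self is 0 or 1 */
    int other = 1 - self;
    flag[self] = true;                   /* announce interest */
    turn = other;                        /* immediately give away the turn */
    while (flag[other] && turn == other)
        ;                                /* busy-wait only while the other
                                            process is also interested */
}

void leave_region(int self) {
    flag[self] = false;                  /* we have left the critical section */
}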

29)Explain virtual machine concept in detail.[07]


Ans:

A virtual machine (VM) is an operating system (OS) or application environment that is


installed on software, which imitates dedicated hardware. The end user has the same experience
on a virtual machine as they would have on dedicated hardware.

Specialized software, called a hypervisor, emulates the PC client or server's CPU,


memory, hard disk, network and other hardware resources completely, enabling virtual
machines to share the resources. The hypervisor can emulate multiple virtual hardware
platforms that are isolated from each other, allowing virtual machines to run Linux and
Windows Server operating systems on the same underlying physical host. Virtualization
limits costs by reducing the need for physical hardware systems. Virtual machines more
efficiently use hardware, which lowers the quantities of hardware and associated maintenance
costs, and reduces power and cooling demand. They also ease management because virtual
hardware does not fail. Administrators can take advantage of virtual environments to
simplify backups, disaster recovery, new deployments and basic system administration tasks.

Virtual machines do not require specialized, hypervisor-specific hardware. Virtualization does,


however, require more bandwidth, storage and processing capacity than a traditional server or
desktop if the physical hardware is going to host multiple running virtual machines. VMs can
easily move, be copied and reassigned between host servers to optimize hardware resource
utilization. Because VMs on a physical host can consume unequal resource quantities -- one
may hog the available physical storage, while another stores little -- IT professionals must
balance VMs with available resources.
VM Management

The use of virtual machines also comes with several important management considerations,
many of which can be addressed through general systems administration best practices and
tools that are designed to manage VMs. There are some risks to consolidation, including
overtaxing resources or potentially experiencing outages on multiple VMs due to one physical
hardware outage. While these cost savings increase as more virtual machines share the same
hardware platform, it does add risk. It is possible to place hundreds of virtual machines on the
same hardware, but if the hardware platform fails, it could take out dozens or hundreds of
virtual machines.

VM Uses

VMs have multiple uses, but in general they are deployed when different operating systems and amounts of processing power are needed for different applications running simultaneously: for example, when an enterprise wants to test multiple web servers and small databases at the same time, or wants to use the same server to run graphics-intensive gaming software and a customer service database.

30)what is virtualization? Explain brief about VMware ESXi, Microsoft Hyper-V in


virtualization.[07]
Ans:

Virtualization is the process of running a virtual instance of a computer system in a layer


abstracted from the actual hardware. Most commonly, it refers to running multiple operating
systems on a computer system simultaneously
VMware ESXi:
VMware ESXi is an operating system-independent hypervisor based on
the VMkernel operating system that interfaces with agents that run on top of it. ESXi stands
for Elastic Sky X Integrated.

ESXi is a type-1 hypervisor, meaning it runs directly on system hardware without the need for
an operating system (OS). Type-1 hypervisors are also referred to as bare-metal hypervisors
because they run directly on hardware.

ESXi is targeted at enterprise organizations. VMware describes an ESXi system as similar to a


stateless compute node. Virtualization administrators can upload state information from a
saved configuration file.

ESXi's VMkernel interfaces directly with VMware agents and approved third-party modules.
Admins can configure VMware ESXi using its console or a vSphere client. They can also check
VMware's hardware compatibility list for approved, supported hardware on which to install
ESXi.

Microsoft Hyper-V

Hyper-V is Microsoft's hardware virtualization product. It lets you create and run a software
version of a computer, called a virtual machine. Each virtual machine acts like a complete
computer, running an operating system and programs. When you need computing resources,
virtual machines give you more flexibility, help save time and money, and are a more efficient
way to use hardware than just running one operating system on physical hardware.

Hyper-V runs each virtual machine in its own isolated space, which means you can run more
than one virtual machine on the same hardware at the same time. You might want to do this to
avoid problems such as a crash affecting the other workloads, or to give different people,
groups or services access to different systems.

Features of Hyper-V

Computing environment - A Hyper-V virtual machine includes the same basic parts as a
physical computer, such as memory, processor, storage, and networking. All these parts have
features and options that you can configure different ways to meet different needs. Storage and
networking can each be considered categories of their own, because of the many ways you can
configure them.

Disaster recovery and backup - For disaster recovery, Hyper-V Replica creates copies of
virtual machines, intended to be stored in another physical location, so you can restore the
virtual machine from the copy. For backup, Hyper-V offers two types. One uses saved states
and the other uses Volume Shadow Copy Service (VSS) so you can make application-consistent backups for programs that support VSS.

Optimization - Each supported guest operating system has a customized set of services and
drivers, called integration services, that make it easier to use the operating system in a Hyper-
V virtual machine.

Portability - Features such as live migration, storage migration, and import/export make it
easier to move or distribute a virtual machine.

Remote connectivity - Hyper-V includes Virtual Machine Connection, a remote connection


tool for use with both Windows and Linux. Unlike Remote Desktop, this tool gives you console
access, so you can see what's happening in the guest even when the operating system isn't
booted yet.

Security - Secure boot and shielded virtual machines help protect against malware and other
unauthorized access to a virtual machine and its data.

31) Give the Difference between Multi-Programming, Multiprocessing System. Write


different operating system services[07]

Ans:

Sr. No. | Multiprocessing | Multiprogramming
1 | Multiprocessing refers to processing of multiple processes at the same time by multiple CPUs. | Multiprogramming keeps several programs in main memory at the same time and executes them concurrently utilizing a single CPU.
2 | It utilizes multiple CPUs. | It utilizes a single CPU.
3 | It permits parallel processing. | Context switching takes place.
4 | Less time is taken to process the jobs. | More time is taken to process the jobs.
5 | It facilitates much more efficient utilization of the devices of the computer system. | It is less efficient than multiprocessing.
6 | Such systems are usually more expensive. | Such systems are less expensive.

operating system services

• Program execution
• I/O operations
• File System manipulation
• Communication
• Error Detection
• Resource Allocation
• Protection
Program execution
Operating systems handle many kinds of activities from user programs to system programs
like printer spooler, name servers, file server, etc. Each of these activities is encapsulated as a
process.
I/O Operation
An I/O subsystem comprises I/O devices and their corresponding driver software. Drivers hide the peculiarities of specific hardware devices from the users.
File system manipulation
A file represents a collection of related information. Computers can store files on the disk
(secondary storage), for long-term storage purpose. Examples of storage media include
magnetic tape, magnetic disk and optical disk drives like CD, DVD. Each of these media has
its own properties like speed, capacity, data transfer rate and data access methods.

Communication
In case of distributed systems which are a collection of processors that do not share memory,
peripheral devices, or a clock, the operating system manages communications between all the
processes. Multiple processes communicate with one another through communication lines in
the network.
Error handling
Errors can occur anytime and anywhere. An error may occur in CPU, in I/O devices or in the
memory hardware. Following are the major activities of an operating system with respect to
error handling −

• The OS constantly checks for possible errors.


• The OS takes an appropriate action to ensure correct and consistent computing.
Resource Management
In case of multi-user or multi-tasking environment, resources such as main memory, CPU
cycles and files storage are to be allocated to each user or job. Following are the major
activities of an operating system with respect to resource management −

• The OS manages all kinds of resources using schedulers.


• CPU scheduling algorithms are used for better utilization of CPU.
Protection
Considering a computer system having multiple users and concurrent execution of multiple
processes, the various processes must be protected from each other's activities.

32)Explain Linux VServer Virtual Machine Architecture.[07]

Ans:

A hypervisor VM is a complete operating system whose relationship to the "core four" hardware resources is fully virtualized: it thinks it's running on its own computer.

A hypervisor installs a VM from the same ISO image you would download and use to install
an operating system directly onto an empty physical hard drive.

A container, on the other hand is, effectively, an application, launched from a script-like
template, that thinks it’s an operating system. In container technologies (like LXC and Docker),
containers are nothing more than software and resource (files, processes, users) abstractions
that rely on the host kernel and a representation of the "core four" hardware resources (i.e., CPU, RAM, network and storage) for everything they do.

Of course, since containers are, effectively, isolated extensions of the host kernel, virtualizing
Windows (or even older or newer Linux releases running incompatible versions of libc) on,
say, an Ubuntu 16.04 host, is impossible. But the technology does allow for incredibly
lightweight and versatile compute opportunities.

Migration

The virtualization model also permits a very wide range of migration, backup, and cloning
operations — even from running systems (V2V). Since the software resources that define and
drive a virtual machine are so easily identified, it usually doesn’t take too much effort to
duplicate whole server environments in multiple locations and for multiple purposes.

Sometimes it's no more complicated than creating an archive of a virtual file system on one host, unpacking it within the same path on a different host, checking the basic network settings, and firing it up. Most platforms offer a single command-line operation to move guests between hosts.

Migrating deployments from physical servers to virtualized environments (P2V) can


sometimes be a bit more tricky. Even creating a cloned image of a simple physical server and
importing it into an empty VM can involve some complexity. And once that’s done, you may
still need to make considerable adjustments to the design to take full advantage of all the
functionality the virtualization has to offer. Depending on the operating system that you are
migrating, you might also need to incorporate paravirtualized drivers into the process to allow
the OS to run properly in its new home.

33)Explain Android Virtual Machine[07]

Ans:

Android is one of the most common operating systems out there. It has proven to dominate the
smartphone market but is yet to get its way into the world of PCs. This is not to mean that you
cannot enjoy having an Android environment on your computer. You can do so using
virtualization software.

There are many reasons why you would want to have the latest Android 8.1 Oreo on your
computer. It could be that you are a developer and have decided to venture into Android apps.
You will need an emulator to test the apps you develop. You can use virtualization software to
create an Android device-like environment on which you can try out the application. Perhaps
you are just curious what the Oreo has to offer. You can find out by running it on a virtual
machine. Doing this should actually be the first step before any Android user upgrades their
Operating System.

An Android virtual machine can be created using various virtualization software solutions
available. There are many of them but only two have the very best features. These are
VirtualBox and VMware. Their free versions are feature-laden while their paid versions make
the impossible possible. Users get access to every feature of Android just like it works on a
phone. Developers will appreciate the fact that they can create different Android device-like
virtual machines so they can test apps on devices of different specifications. They will be able
to easily create virtual machines with different RAM, ROM, and other specs so as to determine
how the app will work on different Android phones.
The standard Java API and virtual machine are mainly designed for desktop as well as server
systems. They are not that compatible with mobile devices. Because of this, Google has created
a different API and virtual machine for mobile devices. This is known as the Dalvik virtual
machine.
The Dalvik virtual machine is a key component of the Android runtime; it plays the role that the JVM (Java Virtual Machine) plays on desktop systems, but was developed specially for Android. The Dalvik virtual machine supports features that are quite important in Java, such as memory management, multi-threading, etc. Java programs are first compiled into JVM bytecode, which is then translated into DVM bytecode for execution.
Details about both the JVM and the DVM are given as follows:
Java Virtual Machine
The Java Virtual Machine is an application that provides the run-time environment to execute
the Java bytecode. It converts the bytecode into machine code. The Java Virtual Machine can
perform multiple operations like loading the code, verifying the code, executing the code,
providing run-time environment etc.
A diagram that illustrates the working of the Java Virtual Machine is given as follows:

34) Explain Contiguous and Linked File Allocation Methods[07]

Ans:

Contiguous Allocation
• The simplest allocation scheme is to store each file as a contiguous run of disk block.
• We see an example of contiguous storage allocation in fig. 7-5.
• Here the first 40 disk blocks are shown, starting with block 0 on the left. Initially, the
disk was empty.
Each file occupies a contiguous address space on disk.
Assigned disk address is in linear order.
Easy to implement.
External fragmentation is a major issue with this type of allocation technique.

• Advantages

▪ First it is simple to implement because keeping track of where a file’s blocks are is
reduced to remembering two numbers: The disk address of the first block and the
number of blocks in the file.
▪ Second, the read performance is excellent because the entire file can be read from
the disk in a single operation. Only one seek is needed (to the first block), so data
comes in at the full bandwidth of the disk.
▪ Thus contiguous allocation is simple to implement and has high performance.

Drawbacks

▪ Over the course of time, the disk becomes fragmented.


▪ Initially, fragmentation is not a problem, since each new file can be written at the
end of the disk, following the previous one.
▪ However, eventually the disk will fill up and it will become necessary to either
compact the disk, which is expensive, or to reuse the free space in the holes.
▪ Reusing the space requires maintaining a list of holes.
▪ However, when a new file is to be created, it is necessary to know its final size in
order to choose a hole of the correct size to place it in.
• There is one situation in which contiguous allocation is feasible and, in fact, widely used: on CD-ROMs. Here all the file sizes are known in advance and will never change during subsequent use of the CD-ROM file system.

Linked List Allocation

Each file carries a list of links to disk blocks.

Directory contains link / pointer to first block of a file.

No external fragmentation

Effectively used in sequential access file.

Inefficient in case of direct access file.



• Another method for storing files is to keep each one as a linked list of the disk blocks,
as shown in fig
• The first word of each block is used as a pointer to the next one. The rest of the block is
for data.
• Unlike contiguous allocation, every disk block can be used in this method. No space
is lost to disk fragmentation.
• It is sufficient for a directory entry to store only the disk address of the first block; the rest can be found starting there.

Drawbacks

• Although reading a file sequentially is straightforward, random access is extremely


slow. To get to block n, the operating system has to start at the beginning and read
the n-1 blocks prior to it, one at a time.
• The amount of data storage in a block is no longer a power of two because the pointer
takes up a few bytes. While having an unusual size is less efficient because many
programs read and write in blocks whose size is a power of two.
• With the first few bytes of each block occupied by a pointer to the next block, reads of the full block size require acquiring and concatenating information from two disk blocks, which generates extra overhead due to the copying.
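The random-access cost is easy to see in code. In the minimal C sketch below (an assumed setup for illustration), the disk is simulated by an array and the first word of each block holds the index of the next block, so reaching block n of a file means chasing n pointers from the first block:

#include <stdio.h>

#define NBLOCKS 16
int next_block[NBLOCKS];    /* first word of each block; -1 marks end of file */

int seek_block(int first, int n) {
    int b = first;
    for (int i = 0; i < n && b != -1; i++)
        b = next_block[b];  /* each hop would cost one disk read */
    return b;
}

int main(void) {
    /* a file stored in blocks 4 -> 7 -> 2 -> 10 */
    next_block[4] = 7; next_block[7] = 2; next_block[2] = 10; next_block[10] = -1;
    printf("block 2 of the file is in disk block %d\n", seek_block(4, 2));
    return 0;
}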

35)write different types of system calls.[07]


Ans:

System Calls

The system call provides an interface to the operating system services.

Application developers often do not have direct access to the system calls, but can access them
through an application programming interface (API). The functions that are included in the
API invoke the actual system calls. By using the API, certain benefits can be gained:

• Portability: as long as a system supports an API, any program using that API can compile and run.
• Ease of Use: using the API can be significantly easier than using the actual system call.

System Call Parameters

Three general methods exist for passing parameters to the OS:

1. Parameters can be passed in registers.


2. When there are more parameters than registers, parameters can be stored in a block and the
block address can be passed as a parameter to a register.
3. Parameters can also be pushed on or popped off the stack by the operating system.
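As an illustration of the API-versus-system-call distinction, the Linux-specific C sketch below writes to standard output twice: once through the C library wrapper and once through the raw syscall(2) interface, which hands the call number and parameters to the kernel in registers.

#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    const char a[] = "hello via the write() wrapper\n";
    write(1, a, sizeof a - 1);                /* API wrapper around the call */

    const char b[] = "hello via raw syscall()\n";
    syscall(SYS_write, 1, b, sizeof b - 1);   /* direct system call */
    return 0;
}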

Types of System Calls

There are 5 different categories of system calls:

process control, file manipulation, device manipulation, information maintenance and


communication.

Process Control

A running program needs to be able to stop execution either normally or abnormally. When
execution is stopped abnormally, often a dump of memory is taken and can be examined with
a debugger.

File Management

Some common system calls are create, delete, read, write, reposition, or close. Also, there is a
need to determine the file attributes – get and set file attribute. Many times the OS provides an
API to make these system calls.

Device Management

Processes usually require several resources to execute; if these resources are available, they will be granted and control returned to the user process. These resources are also thought of as devices. Some are physical, such as a video card, and others are abstract, such as a file.

User programs request the device, and when finished they release the device. Similar to files,
we can read, write, and reposition the device.

Information Management

Some system calls exist purely for transferring information between the user program and the
operating system. An example of this is time, or date.

The OS also keeps information about all its processes and provides system calls to report this
information.

Communication

There are two models of interprocess communication, the message-passing model and the
shared memory model.

• Message-passing uses a common mailbox to pass messages between processes.


• Shared memory uses certain system calls to create and gain access to regions of memory owned by other processes. The two processes exchange information by reading and writing the shared data.
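A minimal C sketch of the message-passing model using a POSIX pipe between a parent and a child process (the shared-memory model would instead use calls such as shmget or mmap):

#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1) return 1;        /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {                   /* child: the receiver */
        char buf[64];
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("child got: %s", buf); }
        return 0;
    }
    close(fd[0]);                        /* parent: the sender */
    const char *msg = "hello from parent\n";
    write(fd[1], msg, strlen(msg));
    return 0;
}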

36)write about semaphores.write benefits of threads and difference between thread and
process.[07]

Ans:

Semaphore:
A semaphore is a variable that provides an abstraction for controlling access to a shared resource by multiple processes in a parallel programming environment.
There are 2 types of semaphores:
1. Binary semaphores: - Binary semaphores have 2 methods
associated with it (up, down / lock, unlock). Binary semaphores
can take only 2 values (0/1). They are used to acquire locks.
2. Counting semaphores: - Counting semaphore can have possible
values more than two.
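A minimal C sketch using POSIX semaphores: a binary semaphore (initial value 1) protects a shared counter, with sem_wait as the down operation and sem_post as the up operation.

/* Build with: gcc -pthread sem.c */
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t mutex;               /* initialized to 1: a binary semaphore */
int shared_counter = 0;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);  /* down: acquire exclusive access */
        shared_counter++;  /* critical section */
        sem_post(&mutex);  /* up: release */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    sem_init(&mutex, 0, 1);     /* value 1 makes it behave as a lock */
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %d\n", shared_counter);  /* always 200000 */
    sem_destroy(&mutex);
    return 0;
}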

S.N. | Process | Thread
1 | Process is heavy weight or resource intensive. | Thread is light weight, taking fewer resources than a process.
2 | Process switching needs interaction with the operating system. | Thread switching does not need to interact with the operating system.
3 | In multiple processing environments, each process executes the same code but has its own memory and file resources. | All threads can share the same set of open files and child processes.
4 | If one process is blocked, then no other process can execute until the first process is unblocked. | While one thread is blocked and waiting, a second thread in the same task can run.
5 | Multiple processes without using threads use more resources. | Multiple threaded processes use fewer resources.

Thread benefits:

• Threads minimize the context switching time.


• Use of threads provides concurrency within a process.
• Efficient communication.
• It is more economical to create and context switch threads.
• Threads allow utilization of multiprocessor architectures to a greater scale and
efficiency.

37)write about resource allocation graph.[07]



Ans:

The Banker's algorithm uses tables such as allocation, request and available to describe the state of the system. The same information can also be represented as a graph instead of tables; the tables are easy to represent and understand, but the graph shows the same state pictorially. That graph is called the Resource Allocation Graph (RAG).

So, the resource allocation graph describes the state of the system in terms of processes and resources: how many resources are available, how many are allocated, and what the request of each process is. Everything can be represented as a diagram. One advantage of having a diagram is that sometimes it is possible to see a deadlock directly from the RAG, which might not be obvious from the tables. Tables are better if the system contains many processes and resources, while the graph is better if the system contains a small number of processes and resources.

We know that any graph contains vertices and edges, so a RAG also contains vertices and edges. In a RAG, vertices are of two types –
1. Process vertex – Every process is represented as a process vertex. Generally, the process is drawn as a circle.
2. Resource vertex – Every resource is represented as a resource vertex. It is also of two types –

• Single instance type resource – It is represented as a box with one dot inside; the number of dots indicates how many instances of that resource type are present.
• Multi-instance type resource – It is also represented as a box, but with several dots inside.

Now coming to the edges of the RAG. There are two types of edges in a RAG –

1. Assign Edge – If a resource is already assigned to a process, the edge is called an assign edge.
2. Request Edge – If a process wants a resource in the future to complete its execution, the edge is called a request edge.

So, if a process is using a resource, an arrow is drawn from the resource node to the process
node. If a process is requesting a resource, an arrow is drawn from the process node to the
resource node.
Example 1 (Single instances RAG) –

If there is a cycle in the Resource Allocation Graph and each resource in the cycle provides
only one instance, then the processes will be in deadlock. For example, if process P1 holds
resource R1, process P2 holds resource R2 and process P1 is waiting for R2 and process P2 is
waiting for R1, then process P1 and process P2 will be in deadlock.
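On a single-instance RAG, deadlock detection reduces to finding a cycle in the wait-for graph. The minimal C sketch below (an assumed illustration) encodes the P1, P2, P3, P4 cycle from the example and detects it with a depth-first search:

#include <stdio.h>

#define N 4
int adj[N][N];   /* adj[i][j] = 1 means process i waits for a resource held by j */
int state[N];    /* 0 = unvisited, 1 = on the DFS stack, 2 = finished */

int has_cycle(int u) {
    state[u] = 1;
    for (int v = 0; v < N; v++)
        if (adj[u][v]) {
            if (state[v] == 1) return 1;            /* back edge: cycle found */
            if (state[v] == 0 && has_cycle(v)) return 1;
        }
    state[u] = 2;
    return 0;
}

int main(void) {
    /* P1->P2, P2->P3, P3->P4, P4->P1 (nodes 0..3) */
    adj[0][1] = adj[1][2] = adj[2][3] = adj[3][0] = 1;
    for (int i = 0; i < N; i++)
        if (state[i] == 0 && has_cycle(i)) {
            printf("deadlock: cycle detected\n");
            return 0;
        }
    printf("no cycle, no deadlock\n");
    return 0;
}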

38) Explain the goals of I/O software[07]

Ans:

• I/O software is often organized in the following layers −


• User Level Libraries − This provides simple interface to the user program to perform
input and output. For example, stdio is a library provided by C and C++ programming
languages.
• Kernel Level Modules − This provides device driver to interact with the device
controller and device independent I/O modules used by the device drivers.
• Hardware − This layer includes actual hardware and hardware controller which
interact with the device drivers and makes hardware alive.
o Device Independence:
• It should be possible to write programs that can access any I/O devices
without having to specify device in advance.
• For example, a program that reads a file as input should be able to read a file
on a floppy disk, on a hard disk, or on a CD-ROM, without having to modify
the program for each different device.

o Uniform naming:
• Name of file or device should be some specific string or number. It must not
depend upon device in any way.
• In UNIX, all disks can be integrated in file system hierarchy in arbitrary way
so user need not be aware of which name corresponds to which device.
• All files and devices are addressed the same way: by a path name.
o Error handling:
• Errors should be handled as close to the hardware as possible. If a controller generates an error, it should try to solve that error itself. If the controller cannot solve it, the device driver should handle it, perhaps by reading the block again.
• Many times when an error occurs, it is solved in a lower layer. If the lower layers are not able to handle the problem, it should be reported to the upper layers.
• In many cases error recovery can be done at a lower layer without the upper layers even knowing about the error.
o Synchronous versus Asynchronous:
• Most devices are asynchronous: the CPU starts the transfer and goes off to do something else until the interrupt occurs. I/O software needs to support both types of devices.
• User programs are much easier to write if the I/O operations are blocking.
• It is up to the operating system to make operations that are actually interrupt-
driven look blocking to the user programs.

o Buffering:
• Data coming into main memory cannot always be stored directly. For example, data packets coming from the network cannot be stored directly in physical memory; packets have to be put into a buffer so they can be examined.
• Some devices have several real-time constraints, so data must be put into output buffer
in advance to decouple the rate at which buffer is filled and the rate at which it is
emptied, in order to avoid buffer under runs.
• Buffering involves considerable copying and often has a major impact on I/O performance.

39) Explain Linux kernel and its functions in brief[07]

Ans:

The Linux Kernel is the heart of the operating system. Without the Kernel, we simply can not
perform any task, since it is mainly responsible for the software and hardware of our computer
working correctly and can interact with each other.

Kernel code executes in a special privileged mode called kernel mode with full access to all resources of the computer. This code represents a single process, executes in a single address space, does not require any context switch, and hence is very efficient and fast. The kernel runs each process, provides system services to processes, and provides them protected access to hardware.

• At the lowest level it contains interrupt handlers which are the primary
way for interacting with the device, and low level dispatching
mechanism.
• At the highest level the I/O operations are all integrated under a virtual
file system and at lowest level, all I/O operations pass through some
device driver.
• All Linux drivers are classified as either character device drivers or block device drivers, the main difference being that random access is allowed on block devices and not on character devices.
• Technically, network devices are really character devices, but they are handled somewhat differently, so it is preferable to separate them.
• On top of the disk drivers is the I/O scheduler, which is responsible for ordering and issuing disk operation requests in a way that tries to conserve wasteful disk-head movement.
• Memory management tasks include maintaining the virtual to
physical memory mappings, maintaining a cache of the recently
accessed pages and implementing a good page replacement policy.
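To make the character/block distinction concrete, here is a hypothetical, minimal
character device module using the classic register_chrdev() interface; the name
sketchdev and the read behaviour are invented for illustration, and a production
driver would use the newer cdev interface and fuller error handling.

/* Minimal Linux character device sketch (illustrative only). */
#include <linux/fs.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/uaccess.h>

#define DEV_NAME "sketchdev"   /* hypothetical device name */
static int major;              /* major number assigned by the kernel */

static ssize_t sketch_read(struct file *f, char __user *buf,
                           size_t len, loff_t *off) {
    const char msg[] = "hello from the kernel\n";
    size_t n = len < sizeof(msg) ? len : sizeof(msg);
    if (*off > 0)
        return 0;                          /* EOF after the first read */
    if (copy_to_user(buf, msg, n))
        return -EFAULT;
    *off += n;
    return n;
}

static const struct file_operations sketch_fops = {
    .owner = THIS_MODULE,
    .read  = sketch_read,      /* byte-stream access: no random seeking */
};

static int __init sketch_init(void) {
    major = register_chrdev(0, DEV_NAME, &sketch_fops); /* 0 = any free major */
    return (major < 0) ? major : 0;
}

static void __exit sketch_exit(void) {
    unregister_chrdev(major, DEV_NAME);
}

module_init(sketch_init);
module_exit(sketch_exit);
MODULE_LICENSE("GPL");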

Kernel functions

The main functions of the kernel are the following:

• Manage RAM, so that all running programs and processes have the memory they need.
• Manage processor time, dividing it among the running processes.
• Manage access to and use of the different peripherals connected to the computer.

40) Explain evolution of operating system in detail with suitable diagrams [07]

Ans:

Computer software is roughly divided into two main categories - application software and
operating system software. Applications are programs used by people to carry out various tasks,
such as writing a letter, creating a financial spreadsheet, or querying a customer database.
Operating systems, on the other hand, manage the computer system on which these applications
run. Without an operating system, it would be impossible to run any of the application software
we use every day, or even to boot up the computer.

The early computers of the late 1940s had no operating system. Human operators
scheduled jobs for execution and supervised the use of the computer's resources.
Because these machines were very expensive, the main purpose of an operating system
in those days was to make the hardware as efficient as possible. Now that computer
hardware is relatively cheap compared with the cost of the personnel required to
operate it, the purpose of the operating system has evolved to encompass the task of
making the user as efficient as possible.

An operating system functions in much the same way as other software. It is a
collection of programs that are loaded into memory and executed by the processor.
When the computer is powered down it exists only as a collection of files on a disk
drive. The main difference is that, once it is running, it has a large measure of
control over both the processor itself and other system resources. In fact, the
operating system only relinquishes control to allow other programs to execute. An
application program is frequently given control of the processor for short periods
of time in order to carry out its allotted task, but control always reverts to the
operating system, which can then either use the processor itself or allocate it to
another program.

The operating system, then, controls the operation of the computer. This includes
determining which programs may use the processor at any given time, managing system
resources such as working memory and secondary storage, and controlling access to
input and output devices. In addition to controlling the system itself, the
operating system must provide an interface between the system and the user which
allows the user to interact with the system in an optimal manner.

Increasingly these days, the operating system provides sophisticated networking
functionality, and is expected to be compatible with a growing range of
communication devices and other peripherals. In recent years, the implementation of
an application programming interface (API) has been a feature of most operating
systems, making the process of writing application programs for those operating
systems much easier, and creating a standardised application environment.