
 1. What are the different types of OS? Describe all the types with examples.

 An operating system is system software that ensures users can control the activities of both the
software and hardware and make use of the resources of the computer. Network operating
systems are the software that allows several computers to communicate and share files and
hardware devices amongst themselves. The operating system helps manage resources such as
RAM, ROM, hard disks, etc. It also allows users to perform specific tasks like arithmetic
calculations, data processing, etc. The OS allows users to perform various tasks like data input,
processing operations, and accessing output. Examples of operating systems include Microsoft
Windows, macOS, Linux, Unix, etc.
 Types of operating system
 1. Multiprocessor OS
 A multiprocessor operating system is an operating system that uses multiple processors to
improve performance. This operating system is commonly found on computer systems with more
than one CPU. Multiprocessor systems improve system performance by allowing the execution of
tasks on multiple processors simultaneously, which overall reduces the time it takes to
complete specific tasks.

Advantages

 It allows the system to run multiple programs simultaneously.


 Beneficial for tasks that need to use all of the processor’s resources, such as games, scientific
calculations, and financial simulations.

Disadvantages

 They require additional hardware, such as processors and memory, making a system more expensive.

2. Multi-programming OS

An operating system which can run multiple processes on a single processor is called a
multiprogramming operating system. The different programs that want to be executed are kept in the
ready queue and are assigned to the CPU one by one. If one process gets blocked, another process from
the ready queue is assigned to the CPU. The aim of this is optimal resource utilization and higher CPU
utilization. For example, consider several processes resident in RAM (main memory): some processes are
waiting for the CPU, and process 2 (which was previously executing) is now doing I/O operations, so the
CPU switches to execute process 1.

3. Distributed OS
A distributed operating system is an operating system that is designed to operate on a network of
computers. Distributed systems are usually used to distribute software applications and data. Distributed
systems are also used to manage the resources of multiple computers. Users may be at different
sites. Multiple computers are connected via a single communication channel. Every system has its own
processor and memory. Resources like disks, computers, CPUs, network interfaces, nodes, etc. are shared
among different computers at different locations. This increases data availability in the entire system.

Advantages

 It is more reliable as a failure of one system will not impact the other computers or the overall system.
 All computers work independently.
 Resources are shared so there is less cost overall.
 The system works at a higher speed since resources are shared.
 The host system has less load.
 Computers can be easily added to the system.

Disadvantages

 Costly setup.
 If the server fails, then the whole system will fail.
 Complex software is required for such a system.

4. Multitasking OS

Multi-tasking operating systems are designed to enable multiple applications to run simultaneously.
They allow a user to work with several documents or applications at the same time.

For example, a user running antivirus software, browsing the internet, and playing a song at the same
time is using a multitasking OS.
 2. What are multiprogramming systems and batch systems?

1. Batch Processing: In a batch processing system, a series of jobs is executed without any human
intervention. Jobs with similar needs are batched together and submitted to the computer for execution.
It is also called a Simple Batch System. It is slower in processing than a multiprogramming system.
Advantages of Batch Processing:
 It manages large repeated work easily.
 No special hardware and system support required to input data in batch systems.
 It can be shared by multiple users.
 The idle time of the batch system is very low.
 It enables efficient management of large workloads.
Disadvantages of Batch Processing :
 It has a longer turnaround time.
 Non-linear behavior.
 Irreversible behavior.
 Due to a mistake, any job may go into an infinite loop.
 It sometimes proves to be costly.
2. Multiprogramming: A multiprogramming operating system allows multiple processes to execute by
monitoring their process states and switching between processes. It executes multiple programs to avoid
CPU and memory underutilization. It is also called a Multiprogram Task System. It is faster in processing
than a batch processing system. Advantages of Multiprogramming:
 CPU never becomes idle
 Efficient resource utilization
 Response time is shorter
 Short time jobs completed faster than long time jobs
 Increased Throughput
Disadvantages of Multiprogramming :
 Long jobs have to wait for a long time
 Tracking all processes is sometimes difficult
 CPU scheduling is required
 Requires efficient memory management
 User interaction is not possible during program execution
 3. What is the difference between a batch system and a time-sharing system?
 Batch operating systems: A batch is a sequence of jobs. Such a batch is submitted to a batch
processing operating system, and the output appears some time later in the form of program results or
program errors. To speed up processing, jobs with similar needs are batched together. The main task of a
batch operating system is to transfer control automatically from one job to the next. Here the operating
system is always resident in memory.
 (i) There is a lack of interaction between the job and the user while executing.
 (ii) Turnaround time is longer.
 (iii) The CPU is often idle, because I/O devices are very slow.
 Time sharing: Also called multitasking, it is a logical extension of multiprogramming. Multiple jobs
are executed through the CPU switching among them. Here the computer system provides on-line
communication between the user and the system.
 Here the CPU is never idle. A time-shared operating system permits many users
to share the computer concurrently.
 Time-sharing systems need some sort of memory protection and management.

4. Write short notes on the following:
 (a) Real-time system
 (b) Multitasking & multiuser
 (c) Multiprocessing & real-time OS
(a) A real-time operating system (RTOS) is a special-purpose operating system used in
computers that has strict time constraints for any job to be performed. It is employed
mostly in those systems in which the results of the computations are used to
influence a process while it is executing. Whenever an event external to the computer
occurs, it is communicated to the computer with the help of some sensor used to
monitor the event. The sensor produces the signal that is interpreted by the
operating system as an interrupt. On receiving an interrupt, the operating system
invokes a specific process or a set of processes to serve the interrupt.
(b) (i) Multitasking is a term used in modern computer systems. It is a logical extension of a
multiprogramming system that enables the execution of multiple programs
simultaneously. In an operating system, multitasking allows a user to perform more
than one computer task simultaneously. Multiple tasks are also known as processes
that share similar processing resources like a CPU. The operating system keeps track
of where you are in each of these jobs and allows you to transition between them
without losing data.
(ii) A multi-user operating system is an operating system that permits several users
to access a single system running a single operating system. These systems are
frequently quite complex, and they must manage the tasks that the various users
connected to them require. Users will usually sit at terminals or computers connected
to the system via a network and other system machines like printers. A multi-user
operating system differs from a single-user operating system in that each
user accesses the same operating system from different machines.
(c) Multiprocessing: In operating systems, to improve performance, more than one CPU can be used
within one computer system; such a system is called a multiprocessor operating system.
Multiple CPUs are interconnected so that a job can be divided among them
for faster execution. When a job finishes, results from all CPUs are collected
and compiled to give the final output. Jobs may need to share main memory,
and they may also share other system resources among themselves. Multiple
CPUs can also be used to run multiple jobs simultaneously.
For Example: UNIX Operating system is one of the most widely used
multiprocessing systems.

Unit – 2

1. What is the function of the kernel in an OS? Describe the various
components of an OS with reference to DOS.
In computer science, the kernel is a computer program that is the core, or heart, of an
operating system. Before discussing the kernel in detail, let's first understand the basics, i.e.,
what an operating system is in a computer.

Operating System

An operating system or OS is system software that works as an interface between hardware
components and the end-user. It enables other programs to run. Every computer system,
whether it is a desktop, laptop, tablet, or smartphone, must have an OS to provide basic
functionalities for the device. Some widely used operating systems are Windows, Linux,
MacOS, Android, iOS, etc.

What is Kernel in Operating System?

o As discussed above, Kernel is the core part of an OS(Operating system); hence it has full
control over everything in the system. Each operation of hardware and software is
managed and administrated by the kernel.
o It acts as a bridge between applications and data processing done at the hardware level.
It is the central component of an OS.
o It is the part of the OS that always resides in computer memory and enables the
communication between software and hardware components.
o It is the first program loaded on system start-up (after the bootloader). Once it is loaded, it
manages the rest of the start-up process. It also manages memory, peripherals, and I/O
requests from software. Moreover, it translates all I/O requests into data processing
instructions for the CPU. It also manages other tasks such as memory management, task
management, and disk management.
o The kernel is kept in, and usually loaded into, a separate memory space known as protected
kernel space. It is protected from being accessed by application programs or less
important parts of the OS.
o Other application programs such as browser, word processor, audio & video player use
separate memory space known as user-space.
o Due to these two separate spaces, user data and kernel data don't interfere with each
other and do not cause any instability or slowness.

Functions of a Kernel

A kernel of an OS is responsible for performing various functions and has control over the
system. Some main responsibilities of Kernel are given below:

o Device Management
To perform various actions, processes require access to peripheral devices such as a
mouse, keyboard, etc., that are connected to the computer. A kernel is responsible for
controlling these devices using device drivers. Here, a device driver is a computer
program that helps or enables the OS to communicate with any hardware device.
A kernel maintains a list of all the available devices, and this list may be already known,
configured by the user, or detected by OS at runtime.
o Memory Management
The kernel has full control over access to the computer's memory. Each process requires
some memory to work, and the kernel enables the processes to safely access the
memory. The first step in allocating memory is virtual addressing, which is done by
paging or segmentation. Virtual addressing is the process of providing virtual
address spaces to the processes. This prevents applications from interfering with each
other.
o Resource Management
One of the important functionalities of Kernel is to share the resources between various
processes. It must share the resources in such a way that each process gets fair access to the
resource.
The kernel also provides a way for synchronization and inter-process communication
(IPC). It is responsible for context switching between processes.
o Accessing Computer Resources
A kernel is responsible for accessing computer resources such as RAM and I/O
devices. RAM or Random-Access Memory is used to contain both data and
instructions. Each program needs to access memory to execute and often needs
more memory than is available. In such a case, the kernel decides which
memory each process will use and what to do if the required memory is not available.
The kernel also services requests from applications to use I/O devices such as
keyboards, microphones, printers, etc. (a small user-space illustration of this mediation
follows below).
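
As a small user-space illustration (a sketch added here, assuming a POSIX system; it is not part of the original notes): even a simple C program never touches the keyboard or screen hardware directly. It issues read and write system calls, and the kernel's device drivers perform the actual I/O on its behalf.

    #include <unistd.h>
    #include <string.h>

    int main(void) {
        const char *msg = "Type a line; the kernel's drivers handle the terminal I/O:\n";
        char buf[128];

        /* write() and read() are system calls: the request crosses from user space
           into kernel space, where the kernel talks to the device driver. */
        write(STDOUT_FILENO, msg, strlen(msg));
        ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
        if (n > 0)
            write(STDOUT_FILENO, buf, (size_t)n);   /* echo what the kernel delivered */
        return 0;
    }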

2. What are BIOS & DOS interrupts?

BIOS is what starts up the hardware in the computer and runs a check to be sure that the
processor, RAM, etc. are all working. It handles everything up to the point where it
has to load an Operating System.

DOS is one of the Operating Systems that BIOS can load. It has not been widely used since the mid
90's. Microsoft replaced it with Windows, which is an actual Operating System itself and
does not use DOS. There is a Command Prompt in Windows that does allow you to run
some of the commands that used to be part of DOS, but DOS itself is not used any
more. A BIOS interrupt is a software interrupt (for example, INT 13h for disk services) that invokes a
service routine stored in the BIOS ROM, while a DOS interrupt (for example, INT 21h) invokes a DOS
service routine for file and device operations.

In short, BIOS is what boots the hardware; DOS (or any OS) is what loads the software once the hardware is up.
3. Explain the various process states. Draw the process state diagram.
Process States

State Diagram

The process, from its creation to completion, passes through various states. The minimum
number of states is five.

The names of the states are not standardized although the process may be in one of the
following states during execution.

1. New
A program which is going to be picked up by the OS into the main memory is called a new
process.

2. Ready

Whenever a process is created, it directly enters the ready state, in which it waits for the CPU
to be assigned. The OS picks new processes from the secondary memory and puts all of them
in the main memory.

The processes which are ready for the execution and reside in the main memory are called ready
state processes. There can be many processes present in the ready state.

3. Running

One of the processes from the ready state will be chosen by the OS depending upon the
scheduling algorithm. Hence, if we have only one CPU in our system, the number of running
processes for a particular time will always be one. If we have n processors in the system then we
can have n processes running simultaneously.

4. Block or wait

From the Running state, a process can make the transition to the block or wait state depending
upon the scheduling algorithm or the intrinsic behavior of the process.

When a process waits for a certain resource to be assigned or for input from the user, the OS
moves this process to the block or wait state and assigns the CPU to other processes.

5. Completion or termination

When a process finishes its execution, it comes to the termination state. All the context of the
process (its Process Control Block) is deleted, and the process is terminated by the
Operating system.

6. Suspend ready

A process in the ready state which is moved to secondary memory from the main memory due
to a lack of resources (mainly primary memory) is said to be in the suspend ready state.

If the main memory is full and a higher-priority process arrives for execution, the OS
has to make room for it in the main memory by moving a lower-priority
process out into the secondary memory. The suspend ready processes remain in the secondary
memory until main memory becomes available.

7. Suspend wait
Instead of removing a process from the ready queue, it is better to remove a blocked
process which is waiting for some resource in the main memory. Since it is already waiting for
some resource to become available, it is better if it waits in the secondary memory and makes
room for a higher-priority process. These processes complete their execution once main
memory becomes available and their wait is finished.
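
The states above can be summarised in a small C sketch (the enum names are illustrative, not standardized) that walks through one common path in the state diagram.

    #include <stdio.h>

    /* The seven states described above; names are illustrative. */
    enum pstate { NEW, READY, RUNNING, BLOCKED, TERMINATED, SUSPEND_READY, SUSPEND_WAIT };

    static const char *name(enum pstate s) {
        static const char *n[] = { "New", "Ready", "Running", "Block/Wait",
                                   "Terminated", "Suspend ready", "Suspend wait" };
        return n[s];
    }

    int main(void) {
        /* One common path: admit, dispatch, block on I/O, become ready again,
           run again, and finally terminate. */
        enum pstate path[] = { NEW, READY, RUNNING, BLOCKED, READY, RUNNING, TERMINATED };
        for (int i = 0; i < 7; i++)
            printf("%s%s", name(path[i]), i < 6 ? " -> " : "\n");
        return 0;
    }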

Operations on the Process

1. Creation

Once the process is created, it will be ready and come into the ready queue (main memory) and
will be ready for the execution.

2. Scheduling

Out of the many processes present in the ready queue, the operating system chooses one
process and starts executing it. Selecting the process which is to be executed next is known as
scheduling.

3. Execution

Once the process is scheduled for execution, the processor starts executing it. A process may
move to the blocked or wait state during execution; in that case, the processor starts
executing other processes.

4. Deletion/killing

Once the purpose of the process is served, the OS kills the process. The context of the
process (PCB) is deleted, and the process is terminated by the operating system.

4. Difference between Operating System and Kernel

1. Operating System: An operating system is one of the most important components that helps in
   managing computer software and hardware resources.
   Kernel: The kernel is a core element of the OS that converts user queries into machine language.

2. Operating System: It is system software.
   Kernel: It is system software which is an important component of the operating system.

3. Operating System: One major purpose of an operating system is to provide security.
   Kernel: The main purpose of a kernel is to manage memory, disk, and tasks.

4. Operating System: It provides an interface between hardware and user.
   Kernel: It provides an interface between applications and hardware.

5. Operating System: Without an operating system, a computer cannot run.
   Kernel: Without a kernel, an operating system cannot run.

6. Operating System: Single-user OS, multi-user OS, multiprocessor OS, real-time OS, and
   distributed OS are types of operating system.
   Kernel: Monolithic and micro kernels are the types of kernel.

7. Operating System: It is the first program to begin when the computer boots up.
   Kernel: It is the first program to start when the operating system runs.

UNIT – 3
1. What is a PCB and what is its importance?
While creating a process the operating system performs several operations. To identify the processes, it
assigns a process identification number (PID) to each process. As the operating system supports multi-
programming, it needs to keep track of all the processes. For this task, the process control block (PCB) is
used to track the process’s execution status. Each block of memory contains information about the
process state, program counter, stack pointer, status of opened files, scheduling algorithms, etc. All this
information is required and must be saved when the process is switched from one state to another. When
the process makes a transition from one state to another, the operating system must update the information
in the process's PCB.
A process control block (PCB) contains information about the process, i.e. registers, quantum, priority,
etc. The process table is an array of PCBs, which means it logically contains a PCB for each of the current
processes in the system.
 Pointer – It is a stack pointer which is required to be saved when the process is switched from one
state to another to retain the current position of the process.
 Process state – It stores the respective state of the process.
 Process number – Every process is assigned with a unique id known as process ID or PID which
stores the process identifier.
 Program counter – It stores the counter which contains the address of the next instruction that is to
be executed for the process.
 Registers – These are the CPU registers, which include the accumulator, base and index registers, and
general-purpose registers.
 Memory limits – This field contains information about the memory-management system used by the
operating system. This may include the page tables, segment tables, etc.
 Open files list – This information includes the list of files opened for a process.
 Miscellaneous accounting and status data – This field includes information about the amount of CPU
used, time constraints, jobs or process number, etc.
The process control block stores the register contents, also known as the execution context of the processor,
saved when the process was blocked from running. This execution context enables the operating system to
restore a process's execution context when the process returns to the running state. When the process
makes a transition from one state to another, the operating system updates its information in the process's
PCB. The operating system maintains pointers to each process's PCB in a process table so that it can
access the PCB quickly.
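
A simplified sketch in C of what a PCB might contain (field names are hypothetical; a real kernel, such as Linux with its task_struct, stores far more information):

    #include <stdint.h>
    #include <stdio.h>

    enum pstate { P_NEW, P_READY, P_RUNNING, P_BLOCKED, P_TERMINATED };

    /* Illustrative process control block. */
    struct pcb {
        int           pid;             /* process number (unique ID)      */
        enum pstate   state;           /* current process state           */
        uint64_t      program_counter; /* address of next instruction     */
        uint64_t      registers[16];   /* saved general-purpose registers */
        uint64_t      stack_pointer;   /* saved stack pointer             */
        int           priority;        /* scheduling information          */
        void         *page_table;      /* memory-management information   */
        int           open_files[16];  /* list of open file descriptors   */
        unsigned long cpu_time_used;   /* accounting information          */
    };

    int main(void) {
        struct pcb p = { .pid = 42, .state = P_READY, .priority = 5 };
        printf("PID %d created in state %d with priority %d\n", p.pid, p.state, p.priority);
        return 0;
    }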

2. What is a thread and what are its benefits?


Threads in Operating System

A thread is a single sequential flow of execution of tasks of a process, so it is also known as a
thread of execution or thread of control. A thread is the unit of execution inside a process in
any operating system, and there can be more than one thread inside a process. Each
thread of the same process makes use of a separate program counter and a stack of activation
records and control blocks. A thread is often referred to as a lightweight process.
A process can be split into many threads. For example, in a browser, many tabs can
be viewed as threads. MS Word uses many threads - formatting text from one thread,
processing input from another thread, etc.

Need of Thread:

o It takes far less time to create a new thread in an existing process than to create a new
process.
o Threads share common data, so they do not need to use inter-process
communication (IPC).
o Context switching is faster when working with threads.
o It takes less time to terminate a thread than a process.

Types of Threads

In the operating system, there are two types of threads.

1. Kernel level thread.


2. User-level thread.

User-level thread

The operating system does not recognize user-level threads. User-level threads can be easily
implemented, and they are implemented by the user. If a user performs a user-level thread blocking
operation, the whole process is blocked. The kernel knows nothing about
user-level threads and manages them as if they were single-threaded
processes. Examples: Java threads, POSIX threads, etc.

Advantages of User-level threads

1. User-level threads can be implemented more easily than kernel threads.
2. User-level threads can be used on operating systems that do not
support threads at the kernel level.
3. They are faster and more efficient.
4. Context switch time is shorter than the kernel-level threads.
5. It does not require modifications of the operating system.
6. User-level threads representation is very simple. The register, PC, stack, and mini thread
control blocks are stored in the address space of the user-level process.
7. It is simple to create, switch, and synchronize threads without the intervention of the
kernel.

Disadvantages of User-level threads

1. User-level threads lack coordination between the thread and the kernel.
2. If a thread causes a page fault, the entire process is blocked.

Kernel level thread

Kernel-level threads are recognized and managed by the operating system. There is a thread control
block and a process control block in the system for each thread and process when kernel-level threads
are used. Kernel-level threads are implemented by the operating system. The kernel knows about all the
threads and manages them. The kernel-level thread offers a system call to create and manage
threads from user space. The implementation of kernel threads is more difficult than user
threads. Context switch time is longer for kernel threads. If a kernel thread performs a blocking
operation, the execution of another thread can continue. Examples: Windows, Solaris.
Advantages of Kernel-level threads

1. The kernel is fully aware of all threads.

2. The scheduler may decide to give more CPU time to a process that has a large number
of threads.
3. Kernel-level threads are good for applications that frequently block.

Disadvantages of Kernel-level threads

1. The kernel thread manages and schedules all threads.


2. The implementation of kernel threads is more difficult than user threads.
3. Kernel-level threads are slower than user-level threads.

Components of Threads

Any thread has the following components.

1. Program counter
2. Register set
3. Stack space

Benefits of Threads

o Enhanced throughput of the system: When the process is split into many threads, and
each thread is treated as a job, the number of jobs done in the unit time increases. That
is why the throughput of the system also increases.
o Effective Utilization of Multiprocessor system: When you have more than one thread
in one process, you can schedule more than one thread in more than one processor.
o Faster context switch: The context switching period between threads is less than the
process context switching. The process context switch means more overhead for the
CPU.
o Responsiveness: When the process is split into several threads, and when a thread
completes its execution, that process can be responded to as soon as possible.
o Communication: Multiple-thread communication is simple because the threads share
the same address space, while processes must adopt special inter-process communication
strategies to communicate with each other.
o Resource sharing: Resources can be shared between all threads within a process, such
as code, data, and files. Note: The stack and register cannot be shared between threads.
There is a stack and register for each thread.
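
A minimal POSIX threads sketch (compile with -pthread; the counter and thread names are illustrative) showing two threads of one process sharing the same global data while each runs on its own stack:

    #include <pthread.h>
    #include <stdio.h>

    static int shared_counter = 0;                       /* data shared by all threads */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        const char *name = arg;                          /* each thread has its own stack */
        for (int i = 0; i < 3; i++) {
            pthread_mutex_lock(&lock);
            shared_counter++;                            /* synchronized access to shared data */
            printf("%s increments counter to %d\n", name, shared_counter);
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, (void *)"thread-1");
        pthread_create(&t2, NULL, worker, (void *)"thread-2");
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("final counter = %d\n", shared_counter);  /* 6: both threads saw one copy */
        return 0;
    }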

4. What is a deadlock? What are the necessary conditions for deadlock?

Every process needs some resources to complete its execution. However, the resource is granted in a
sequential order.

1. The process requests some resource.

2. The OS grants the resource if it is available; otherwise the process waits.

3. The process uses it and releases it on completion.

A deadlock is a situation where each of the computer processes waits for a resource which is assigned
to some other process. In this situation, none of the processes gets executed since the resource it needs
is held by some other process which is also waiting for some other resource to be released.

Let us assume that there are three processes P1, P2 and P3. There are three different resources R1, R2 and
R3. R1 is assigned to P1, R2 is assigned to P2 and R3 is assigned to P3.

After some time, P1 demands R2, which is being used by P2. P1 halts its execution since it can't
complete without R2. P2 in turn demands R3, which is being used by P3. P2 also stops its execution
because it can't continue without R3. P3 in turn demands R1, which is being used by P1, therefore P3 also
stops its execution.

In this scenario, a cycle is formed among the three processes. None of the processes is progressing,
and they are all waiting. The computer becomes unresponsive since all the processes are blocked.

A deadlock can arise only if the following four necessary conditions hold simultaneously:
1. Mutual exclusion - at least one resource is held in a non-sharable mode.
2. Hold and wait - a process holds at least one resource while waiting for others.
3. No preemption - a resource cannot be forcibly taken away from a process.
4. Circular wait - a circular chain of processes exists in which each process waits for a resource held
by the next one.
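
A classic two-lock sketch (POSIX threads; compile with -pthread; names are illustrative) in which each thread holds one resource and waits for the other, so all four conditions above hold and the program hangs. Acquiring the locks in the same fixed order in both threads would break the circular wait and avoid the deadlock.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;   /* resource R1 */
    static pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;   /* resource R2 */

    static void *proc1(void *arg) {
        (void)arg;
        pthread_mutex_lock(&r1);          /* P1 holds R1 ...                   */
        sleep(1);
        printf("P1 waiting for R2\n");
        pthread_mutex_lock(&r2);          /* ... and waits for R2 (held by P2) */
        pthread_mutex_unlock(&r2);
        pthread_mutex_unlock(&r1);
        return NULL;
    }

    static void *proc2(void *arg) {
        (void)arg;
        pthread_mutex_lock(&r2);          /* P2 holds R2 ...                   */
        sleep(1);
        printf("P2 waiting for R1\n");
        pthread_mutex_lock(&r1);          /* ... and waits for R1 (held by P1) */
        pthread_mutex_unlock(&r1);
        pthread_mutex_unlock(&r2);
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, proc1, NULL);
        pthread_create(&b, NULL, proc2, NULL);
        pthread_join(a, NULL);            /* never returns: the two threads are deadlocked */
        pthread_join(b, NULL);
        return 0;
    }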

UNIT – 4

1. How is data stored in a computer system?


Computer data storage is a complex subject, but it can be broken down into three basic processes. First,
data is converted to simple numbers that are easy for a computer to store. Second, the numbers are
recorded by hardware inside the computer. Third, the numbers are organized, moved to temporary storage
and manipulated by programs, or software.

Binary Numbers

Every piece of data in a computer is stored as a number. For example, letters are converted to numbers,
and photographs are converted to a large set of numbers that indicate the color and brightness of each
pixel. The numbers are then converted to binary numbers. Conventional numbers use ten digits, from 0-9,
to represent all possible values. Binary numbers use two digits, 0 and 1, to represent all possible values.
The numbers 0 through 8 look like this as binary numbers: 0, 1, 10, 11, 100, 101, 110, 111, 1000. Binary
numbers are very long, but with binary numbers any value can be stored as a series of items which are
true (1) or false (0), such as North/South, Charged/Uncharged, or Light/Dark.
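
A tiny C sketch of the idea (the character chosen is arbitrary): the letter 'A' is stored as the number 65, and printing its bits shows the binary pattern the hardware actually records.

    #include <stdio.h>

    int main(void) {
        unsigned char c = 'A';                 /* letters are stored as numbers (here 65) */
        printf("'%c' = %d = ", c, c);
        for (int bit = 7; bit >= 0; bit--)     /* print the 8 bits, most significant first */
            putchar((c >> bit) & 1 ? '1' : '0');
        putchar('\n');                         /* prints: 'A' = 65 = 01000001 */
        return 0;
    }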

Primary Data Storage

The main data storage in most computers is the hard disk drive. It is a spinning disk or disks with
magnetic coatings and heads that can read or write magnetic information, similar to how cassette tapes
work. In fact, early home computers used cassette tapes for data storage. Binary numbers are recorded as
a series of tiny areas on the disc which are magnetized either north or south. Floppy disks, ZIP drives, and
tapes all use magnetism to record binary numbers. The data on tapes and disks can be destroyed if they
come too close to magnets.

Other Data Storage

Some new laptop computers use solid state drives for primary data storage. These have memory chips,
similar to memory chips in USB keys, SD cards, MP3 players, cell phones and so on. Binary numbers are
recorded by charging or not charging a series of tiny capacitors in the chip. Electronic data storage is
more rugged than magnetic data storage, but after several years the capacitors lose their ability to store
electrical charges.
CDs and DVDs use optics to store binary numbers. As the disk spins, a laser is either reflected or not
reflected by a series of tiny mirrored sections on the disk. Writable disks have a reflective layer that can
be changed by the laser in the computer. Disks are long-lasting, but fragile; scratches on the plastic layer
prevent the laser from correctly reading reflections from the aluminum layer.

Temporary Data Storage

Drives, disks and USB keys are used for long term data storage. Within the computer there are many
areas for short term electronic data storage. Small amounts of data are temporarily stored in a keyboard,
printer and sections of the motherboard and processor. Larger amounts of data are temporarily stored in
the memory chips and video card. Temporary data storage areas are designed to be smaller but faster than
long term storage, and do not retain the data when the computer is turned off.

Organizing Data Storage

Data is stored as lots of binary numbers, by magnetism, electronics or optics. While the computer is
operating, data is also stored in many temporary locations. Software is responsible for organizing, moving
and processing all those numbers. The computer's BIOS contains simple instructions, stored as data in
electronic memory, to move data in and out of different storage locations and around the computer for
processing. The computer's operating system, for example, contains instructions for organizing data into
files and folders, managing temporary data storage, and sending data to application programs and devices
such as printers. Finally, application programs process the data.
2. Distinguish between physical address and logical address.
The physical address identifies the physical location of required data in memory. The user never
directly deals with the physical address but can access it by its corresponding logical address.
The user program generates the logical address and thinks it runs in this address space, but the program
needs physical memory for its execution. Therefore, the logical address must be mapped to a
physical address by the MMU before it is used. The Physical Address Space is the set of all
physical addresses corresponding to the logical addresses in a logical address space.
A logical address is an address that is generated by the CPU during program execution. The
logical address is a virtual address as it does not exist physically, and therefore, it is also known
as a  Virtual Address. This address is used as a reference to access the physical memory location
by the CPU. The term Logical Address Space is used for the set of all logical addresses generated from a
program's perspective.

A logical address usually ranges from zero to a maximum (max). The user program that generates
the logical address assumes that the process runs at locations between 0 and max. This logical
address (generated by the CPU) is combined with the base address held in the MMU to form
the physical address. The hardware device called the Memory-Management Unit (MMU) is used for
mapping logical addresses to their corresponding physical addresses.
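
A short C sketch of the relocation-register scheme implied above (the BASE and LIMIT values are made up for illustration; real MMUs use page tables, but the arithmetic is the same idea): the MMU adds the base register to every CPU-generated logical address after checking it against the limit.

    #include <stdio.h>
    #include <stdlib.h>

    #define BASE  14000u   /* relocation (base) register: where the process sits in RAM */
    #define LIMIT  4000u   /* size of the process's logical address space               */

    /* Map a CPU-generated logical address to a physical address. */
    static unsigned translate(unsigned logical) {
        if (logical >= LIMIT) {                       /* protection check by the MMU */
            fprintf(stderr, "addressing error: %u out of range\n", logical);
            exit(1);
        }
        return BASE + logical;                        /* physical = base + logical   */
    }

    int main(void) {
        printf("logical 0    -> physical %u\n", translate(0));
        printf("logical 1234 -> physical %u\n", translate(1234));
        return 0;
    }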

3. What is virtual memory? Advantages & disadvantages.

Virtual memory is a storage mechanism which offers the user the illusion of having a very big main
memory. It is done by treating a part of secondary memory as if it were main memory. With virtual
memory, the user can run processes with a bigger size than the available main memory.

Therefore, instead of loading one long process in the main memory, the OS loads the various
parts of more than one process in the main memory. Virtual memory is mostly implemented with
demand paging and demand segmentation.

In the modern world, virtual memory has become quite common. It is used whenever
some pages need to be loaded into the main memory for execution and the memory is not
available for that many pages.

So, in that case, instead of preventing pages from entering the main memory, the OS
searches for the pages in RAM that have been least recently used or that are not currently
referenced, and moves them out to secondary memory to make space for the new pages in the main
memory.

Advantages of Virtual Memory


Here, are pros/benefits of using Virtual Memory:

 Virtual memory helps to gain speed when only a particular segment of the program is
required for the execution of the program.
 It is very helpful in implementing a multiprogramming environment.
 It allows you to run more applications at once.
 It helps you to fit many large programs into a smaller physical memory.
 Common data or code may be shared between processes in memory.
 A process may become even larger than all of the physical memory.
 Data/code is read from disk only when required.
 Code can be placed anywhere in physical memory without requiring relocation.
 More processes can be maintained in the main memory, which increases the
effective use of the CPU.
 Each page is kept on disk until it is required; after that it can be removed.
 It allows more applications to be run at the same time.
 There is no specific limit on the degree of multiprogramming.
 Larger programs can be written, as the available virtual address space is bigger than
physical memory.

Disadvantages of Virtual Memory


Here, are drawbacks/cons of using virtual memory:

 Applications may run slower if the system is using virtual memory.


 Likely takes more time to switch between applications.
 Offers lesser hard drive space for your use.
 It reduces system stability.
 It allows larger applications to run in systems that don’t offer enough physical RAM alone
to run them.
 It doesn’t offer the same performance as RAM.
 It negatively affects the overall performance of a system.
 It occupies storage space which might otherwise be used for long-term data storage.

4. What is internal & external fragmentation? Describe it with examples.

"Fragmentation is a process of data storage in which memory space is used inadequately,


decreasing ability or efficiency and sometimes both." The precise implications of fragmentation
depend on the specific storage space allocation scheme in operation and the particular
fragmentation type. In certain instances, fragmentation contributes to "unused" storage
capacity, and the concept also applies to the unusable space generated in that situation. The
memory used to preserve the data set (- for example file format) is similar for other systems (-
for example, the FAT file system), regardless of the amount of fragmentation (from null to the
extreme).

There are three distinct kinds of fragmentation: internal fragmentation, external fragmentation, and
data fragmentation, which can exist in isolation or in combination. Fragmentation is often accepted in
exchange for improvements in speed or simplicity. Similar phenomena occur for other resources, such
as processors.


The Fundamental concept of Fragmentation

When a computer program demands fragments of storage from the operating system (OS), the
memory is assigned in chunks. When the program has finished with a chunk, it can be
released back to the system, making it available to be assigned again to the same or another program
later. Programs differ in the size of the chunks they request and in how long they keep them. A
computer program can demand and release several blocks of storage throughout its lifetime.
When a system starts, the free memory areas are large and contiguous. Over time, with use, the large
contiguous regions become fragmented into smaller and smaller pieces. Eventually, it can become
difficult for the system to allocate large contiguous blocks of storage.

Internal Fragmentation

Memory is often reserved in larger amounts than required, in order to adhere to the restrictions governing
storage allocation. For instance, if memory can only be supplied to programs in blocks that are a multiple
of 4 bytes, then a program that demands 29 bytes will actually get a block of 32 bytes. The
surplus storage goes to waste when this occurs. The useless space is found inside an assigned
region in this case. This scheme, called fixed partitions, suffers from wasted memory: every
process consumes an entire partition, no matter how small the process is. Internal fragmentation is
the term for this wasted space. Unlike other forms of fragmentation, internal fragmentation is
difficult to reclaim; typically, the only way to eliminate it is with a new allocation design.

For instance, in dynamic storage allocation, memory pools can reduce internal fragmentation
significantly by spreading the space overhead over a larger number of objects.

Internal fragmentation can thus be described as the difference between the assigned storage space
and the space that was actually requested.
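
A small C sketch of the 29-byte example above (block size chosen to match the text): the request is rounded up to the allocator's block size, and the leftover bytes inside the block are the internal fragmentation.

    #include <stdio.h>

    #define BLOCK 4u   /* allocator hands out memory only in multiples of 4 bytes */

    int main(void) {
        unsigned request   = 29;                                       /* bytes the program asks for  */
        unsigned allocated = ((request + BLOCK - 1) / BLOCK) * BLOCK;  /* rounded up: 32 bytes        */
        unsigned internal  = allocated - request;                      /* 3 bytes wasted in the block */

        printf("requested %u, allocated %u, internal fragmentation %u bytes\n",
               request, allocated, internal);
        return 0;
    }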

External Fragmentation

External fragmentation occurs when free storage is divided into small blocks that are interspersed with
allocated memory. It is a weakness of many storage allocation
methodologies, which fail to arrange the memory used by programs efficiently. The
consequence is that, although unused storage is available, it is essentially unusable since it is
split into separate fragments that are individually too small to meet the program's requirements. The
word "external" derives from the fact that the unusable space is located outside the assigned
regions.

Consider, for instance, a scenario in which a program obtains three consecutive memory blocks
and then frees the middle block. The memory allocator can use this freed block of
storage for future allocations. However, if the memory to be allocated is larger
than this free block, the allocator cannot use it for that request.

In data files, external fragmentation often arises when many files of various sizes are created,
resized, and deleted. If a file that has been broken into many small pieces is deleted, the effect is
even worse, since this leaves behind equally small sections of free space.

5. What is fixed partitioning and dynamic partitioning? Describe with a diagram.

The earliest and one of the simplest techniques which can be used to load more than one
process into the main memory is fixed partitioning, or contiguous memory allocation.

In this technique, the main memory is divided into partitions of equal or different sizes. The
operating system always resides in the first partition while the other partitions can be used to
store user processes. The memory is assigned to the processes in a contiguous way.

In fixed partitioning,

1. The partitions cannot overlap.


2. A process must be contiguously present in a partition for the execution.

There are various cons of using this technique.

1. Internal Fragmentation

If the size of the process is less than the total size of the partition, then part of the
partition is wasted and remains unused. This wastage of memory is called internal
fragmentation.

For example, if a 4 MB partition is used to load only a 3 MB process, the
remaining 1 MB is wasted.

2. External Fragmentation

The total unused space of the various partitions cannot be used to load a process even though
space is available, because it is not in contiguous form.
For example, the remaining 1 MB of each partition cannot be combined as a
unit to store a 4 MB process. Despite the fact that sufficient total space is available to load the
process, the process will not be loaded.

In dynamic partitioning, by contrast, partitions are not created in advance; instead, a partition of exactly
the size requested by a process is carved out of free memory when the process arrives. This eliminates
internal fragmentation but can still lead to external fragmentation as processes come and go.

6. Write a short note on paging.

In Operating Systems, Paging is a storage mechanism used to retrieve processes from the secondary
storage into the main memory in the form of pages.

The main idea behind the paging is to divide each process in the form of pages. The main memory will
also be divided in the form of frames.

One page of the process is to be stored in one of the frames of the memory. The pages can be stored at
the different locations of the memory but the priority is always to find the contiguous frames or holes.

Pages of the process are brought into the main memory only when they are required otherwise they
reside in the secondary storage.

Different operating systems define different frame sizes. The size of each frame must be equal.
Considering the fact that the pages are mapped to the frames in paging, the page size needs to be the
same as the frame size.

Example

Let us consider the main memory size 16 Kb and Frame size is 1 KB therefore the main memory will be
divided into the collection of 16 frames of 1 KB each.

There are 4 processes in the system, namely P1, P2, P3 and P4, of 4 KB each. Each process is divided into
pages of 1 KB each so that one page can be stored in one frame.

Initially, all the frames are empty, therefore the pages of the processes will get stored in a contiguous
way. Each page is mapped to one of the free frames.
Let us consider that P2 and P4 are moved to the waiting state after some time. Now, 8 frames become
empty and therefore other pages can be loaded in that empty space. The process P5 of size 8 KB (8 pages)
is waiting inside the ready queue.

Since we have 8 non-contiguous frames available in the memory and paging provides the
flexibility of storing the process at different places, we can load the pages of process P5 in
place of P2 and P4.
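
A sketch in C of the address translation paging implies, using the 1 KB page size from the example and a hypothetical page table (the frame numbers are made up): the page number and offset are extracted from the logical address, and the page table maps the page to the frame that holds it.

    #include <stdio.h>

    #define PAGE_SIZE 1024u                 /* 1 KB pages and frames, as in the example */

    /* Hypothetical page table for a 4-page process: page i lives in frame page_table[i]. */
    static const unsigned page_table[4] = { 5, 6, 8, 9 };

    static unsigned translate(unsigned logical) {
        unsigned page   = logical / PAGE_SIZE;            /* which page of the process  */
        unsigned offset = logical % PAGE_SIZE;            /* position within that page  */
        unsigned frame  = page_table[page];               /* where that page was loaded */
        return frame * PAGE_SIZE + offset;                /* physical address           */
    }

    int main(void) {
        unsigned logical = 2 * PAGE_SIZE + 100;           /* byte 100 of page 2         */
        printf("logical %u -> physical %u\n", logical, translate(logical));
        return 0;
    }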

UNIT – 5

1. Write the attributes of a file in the file system.

A file can be defined as a data structure which stores the sequence of records. Files are stored in
a file system, which may exist on a disk or in the main memory. Files can be simple (plain text) or
complex (specially-formatted).

The collection of files is known as Directory. The collection of directories at the different levels, is
known as File System.
Attributes of the File

1.Name

Every file carries a name by which the file is recognized in the file system. One directory cannot
have two files with the same name.

2.Identifier

Along with the name, each file has a unique identifier (typically a number) by which the file system
recognizes it internally. In addition, a file's extension indicates its type; for example, a text file has the
extension .txt and a video file may have the extension .mp4.

3.Type

In a File System, the Files are classified in different types such as video files, audio files, text files,
executable files, etc.

4.Location

In the File System, there are several locations on which, the files can be stored. Each file carries
its location as its attribute.

5.Size

The size of the file is one of its most important attributes. By size of the file, we mean the
number of bytes occupied by the file in storage.

6.Protection

The administrator of the computer may want different protections for different files. Therefore,
each file carries its own set of permissions for different groups of users.

7.Time and Date

Every file carries a time stamp which contains the time and date on which the file is last
modified.

2. What are the operations on a file? Describe them all.

A file is a collection of logically related data that is recorded on secondary storage in the
form of a sequence of records. The contents of a file are defined by its creator. The
various operations which can be performed on a file, such as read, write,
open, and close, are called file operations. These operations are performed by the user by
using the commands provided by the operating system. Some common operations are as
follows:

Types of file operation:

1.Create operation:

This operation is used to create a file in the file system. It is the most widely used operation
performed on the file system. To create a new file of a particular type the associated application
program calls the file system. This file system allocates space to the file. As the file system knows
the format of directory structure, so entry of this new file is made into the appropriate directory.

2. Open operation:

This operation is the common operation performed on the file. Once the file is created, it must
be opened before performing the file processing operations. When the user wants to open a file,
it provides a file name to open the particular file in the file system. It tells the operating system
to invoke the open system call and passes the file name to the file system.

3. Write operation:

This operation is used to write information into a file. A write system call is issued that
specifies the name of the file and the data to be written to the file. The
file length is increased by the amount written, and the file pointer is repositioned after the
last byte written.

4. Read operation:

This operation reads the contents from a file. A Read pointer is maintained by the OS, pointing
to the position up to which the data has been read.

5. Re-position or Seek operation:

The seek system call re-positions the file pointers from the current position to a specific place in
the file i.e. forward or backward depending upon the user's requirement. This operation is
generally performed with those file management systems that support direct access files.

6. Delete operation:

Deleting a file not only deletes all the data stored inside the file but also frees the disk
space occupied by it. In order to delete the specified file, the directory is searched. When
the directory entry is located, all the associated file space and the directory entry are released.

7. Truncate operation:
Truncating deletes the contents of a file while keeping its attributes. The file is not completely
deleted, although the information stored inside the file is released.

8. Close operation:

When the processing of the file is complete, it should be closed so that all the changes made
become permanent and all the resources occupied are released. On closing, the OS deallocates all the
internal descriptors that were created when the file was opened.

9. Append operation:

This operation adds data to the end of the file.

10. Rename operation:

This operation is used to rename the existing file.
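
The operations above map naturally onto the C standard I/O library; a short sketch (the file name is illustrative) exercising create/open, write, seek, read, close and delete:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* Create/open: fopen with "w+" creates the file and opens it for update. */
        FILE *fp = fopen("notes.txt", "w+");
        if (!fp) { perror("fopen"); return 1; }

        /* Write: the file pointer ends up after the last byte written. */
        const char *text = "operating systems\n";
        fwrite(text, 1, strlen(text), fp);

        /* Re-position (seek): move the file pointer back to the beginning. */
        fseek(fp, 0, SEEK_SET);

        /* Read: read back what was written. */
        char buf[64] = {0};
        fread(buf, 1, sizeof buf - 1, fp);
        printf("read back: %s", buf);

        /* Close: make the changes permanent and release the descriptor. */
        fclose(fp);

        /* Delete: remove() releases the directory entry and the file's space. */
        remove("notes.txt");
        return 0;
    }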

3. Write short notes on the following:

(i)Sequential Access

Most of the operating systems access the file sequentially. In other words, we can say that most of the
files need to be accessed sequentially by the operating system.

In sequential access, the OS reads the file word by word. A pointer is maintained which initially points to
the base address of the file. If the user wants to read the first word of the file, the pointer provides that
word to the user and increases its value by one word. This process continues till the end of the file.

Modern systems do provide the concepts of direct access and indexed access, but the most used
method is sequential access, due to the fact that most files, such as text files, audio files, and video files,
need to be accessed sequentially.

(ii)Direct Access
The Direct Access is mostly required in the case of database systems. In most of the cases, we need
filtered information from the database. The sequential access can be very slow and inefficient in such
cases.

Suppose every block of the storage stores 4 records and we know that the record we need is stored in the
10th block. In that case, sequential access will be inefficient because it will traverse all the preceding
blocks in order to access the needed record.

Direct access will give the required result despite the fact that the operating system has to perform
some complex tasks, such as determining the desired block number. It is generally
used in database applications.

(iii)Indexed Access

If a file can be sorted on any of its fields, then an index can be assigned to a group of certain records,
and a particular record can be accessed by its index. The index is essentially the address of a
record in the file.

With indexed access, searching in a large database becomes very quick and easy, but we need some
extra space in memory to store the index.
