
Module 1 Operating System Overview

Course Outcome 1: Understand the basic concepts, evolution, and structure of OS.

An OS is a program that controls the execution of application programs and acts as an
interface between applications and the computer hardware.

1.1 OPERATING SYSTEM OBJECTIVES AND FUNCTIONS

Three objectives:

• Convenience: An OS makes a computer more convenient to use.
• Efficiency: An OS allows the computer system resources to be used in an efficient manner.
• Ability to evolve: An OS should be constructed in such a way as to permit the effective
development, testing, and introduction of new system functions without interfering with
service.

The Operating System as a User/Computer Interface


• The hardware and software used in providing applications to a user can be viewed in a
layered or hierarchical fashion, as depicted in Figure 2.1.
• The user of applications is not concerned with the details of computer hardware.
Thus, the end user views a computer system in terms of a set of applications.
• An application can be expressed in a programming language and is developed by an
application programmer. It is a complex task to develop an application program as a
set of machine instructions that is completely responsible for controlling the computer
hardware.
• To ease this, a set of system programs is provided. Some of these programs are
referred to as utilities. These implement frequently used functions that assist in
program creation, the management of files, and the control of I/O devices.
• A programmer will make use of these facilities in developing an application, and the
application, while it is running, will invoke the utilities to perform certain functions.
• The OS masks the details of the hardware from the programmer and provides the
programmer with a convenient interface for using the system. It acts as mediator,
making it easier for the programmer and for application programs to access and use
those facilities and services.
Services/Functions Provided by OS

• Program development: The OS provides a variety of facilities and services, such as
editors and debuggers, to assist the programmer in creating programs. These services are in
the form of utility programs that, while not strictly part of the core of the OS, are supplied
with the OS and are referred to as application program development tools.

• Program execution: A number of steps need to be performed to execute a program.
Instructions and data must be loaded into main memory, I/O devices and files must be
initialized, and other resources must be prepared. The OS handles these scheduling duties for
the user.

• Access to I/O devices: Each I/O device requires its own peculiar set of instructions or
control signals for operation. The OS provides a uniform interface that hides these details so
that programmers can access such devices using simple reads and writes.

• Controlled access to files: For file access, the OS must reflect a detailed understanding of
not only the nature of the I/O device (disk drive, tape drive) but also the structure of the data
contained in the files on the storage medium. In the case of a system with multiple users, the
OS may provide protection mechanisms to control access to the files.

• System access: For shared or public systems, the OS controls access to the system as a
whole and to specific system resources. The access function must provide protection of
resources and data from unauthorized users and must resolve conflicts for resource
contention.

• Error detection and response: A variety of errors can occur while a computer system is
running. These include internal and external hardware errors, such as a memory error, or a
device failure or malfunction; and various software errors, such as division by zero, attempt
to access forbidden memory location, and inability of the OS to grant the request of an
application. In each case, the OS must provide a response that clears the error condition with
the least impact on running applications. The response may range from ending the program
that caused the error, to retrying the operation, to simply reporting the error to the application.

• Accounting: A good OS will collect usage statistics for various resources and monitor
performance parameters such as response time. On any system, this information is useful in
anticipating the need for future enhancements and in tuning the system to improve
performance. On a multiuser system, the information can be used for billing purposes.
1.2 The Evolution of Operating Systems

Serial Processing
• With the earliest computers, from the late 1940s to the mid-1950s, the programmer
interacted directly with the computer hardware; there was no OS.
• These computers were run from a console consisting of display lights, toggle
switches, some form of input device, and a printer.
• Programs in machine code were loaded via the input device (e.g., a card reader).
• If an error halted the program, the error condition was indicated by the lights. If the
program proceeded to a normal completion, the output appeared on the printer.
These early systems presented two main problems:
• Scheduling: Most installations used a hardcopy sign-up sheet to reserve computer time.
Typically, a user could sign up for a block of time in multiples of a half hour or so. A user
might sign up for an hour and finish in 45 minutes; this would result in wasted computer
processing time. On the other hand, the user might run into problems, not finish in the
allotted time, and be forced to stop before resolving the problem.
• Setup time: A single program, called a job, could involve loading the compiler plus the
high-level language program (source program) into memory, saving the compiled program
(object program) and then loading and linking together the object program and common
functions. Each of these steps could involve mounting
or dismounting tapes or setting up card decks. If an error occurred, the hapless user typically
had to go back to the beginning of the setup sequence. Thus, a considerable amount of time
was spent just in setting up the program to run. This mode of operation could be termed serial
processing, reflecting the fact that users have access to the computer in series.
Simple Batch Systems
• Early computers were very expensive, and therefore it was important to maximize
processor utilization. The wasted time due to scheduling and setup was unacceptable.
• To improve utilization, the concept of a batch operating system was developed.
• The central idea behind the simple batch-processing scheme is the use of a piece of
software known as the monitor. With this type of OS, the user no longer has direct
access to the processor. Instead, the user submits the job on cards or tape to a
computer operator, who batches the jobs together sequentially and places the entire
batch on an input device, for use by the monitor.
• Each program is constructed to branch back to the monitor when it completes
processing, at which point the monitor automatically begins loading the next program.
• Monitor point of view: The monitor controls the sequence of events. For this to be so,
much of the monitor must always be in main memory and available for execution (Figure
2.3). That portion is referred to as the resident monitor.
• The rest of the monitor consists of utilities and common functions that are loaded as
subroutines to the user program at the beginning of any job that requires them. The
monitor reads in jobs one at a time from the input device (typically a card reader or
magnetic tape drive).
• As it is read in, the current job is placed in the user program area, and control is
passed to this job.
• When the job is completed, it returns control to the monitor, which immediately reads
in the next job. The results of each job are sent to an output device, such as a printer,
for delivery to the user.
• Processor point of view:
• At a certain point, the processor is executing instructions from the portion of main
memory containing the monitor.
• These instructions cause the next job to be read into another portion of main memory.
• Once a job has been read in, the processor will encounter a branch instruction in the
monitor that instructs the processor to continue execution at the start of the user
program.
• The processor will then execute the instructions in the user program until it
encounters an ending or error condition.
• Either event causes the processor to fetch its next instruction from the monitor
program.
• Thus the phrase “control is passed to a job” simply means that the processor is now
fetching and executing instructions in a user program, and “control is returned to the
monitor” means that the processor is now fetching and executing instructions from
the monitor program.
Desirable Hardware features:
• Memory protection: While the user program is executing, it must not alter the memory
area containing the monitor. If such an attempt is made, the processor hardware should detect
an error and transfer control to the monitor. The monitor would then abort the job, print out
an error message, and load in the next job.
• Timer: A timer is used to prevent a single job from monopolizing the system. The timer is
set at the beginning of each job. If the timer expires, the user program is stopped, and control
returns to the monitor.
• Privileged instructions: Certain machine level instructions are designated privileged and
can be executed only by the monitor. If the processor encounters such an instruction while
executing a user program, an error occurs causing control to be transferred to the monitor.
• Interrupts: Early computer models did not have this capability. This feature gives the OS
more flexibility in relinquishing control to and regaining control from user programs.

User mode/ Kernel mode


Considerations of memory protection and privileged instructions lead to the concept of modes
of operation. A user program executes in a user mode, in which certain areas of memory are
protected from the user’s use and in which certain instructions may not be executed.
The monitor executes in a system mode, or what has come to be called kernel mode, in
which privileged instructions may be executed and in which protected areas of memory may
be accessed.

Multi-programmed Batch Systems


• Even with the automatic job sequencing provided by a simple batch operating
system, the processor is often idle.
• The problem is that I/O devices are slow compared to the processor.
• Figure 2.5a illustrates this situation, where we have a single program, referred to as
uni-programming.
• The processor spends a certain amount of time executing, until it reaches an I/O
instruction. It must then wait until that I/O instruction concludes before proceeding.
• This inefficiency is not necessary. We know that there must be enough memory to
hold the OS (resident monitor) and one user program.
• Suppose that there is room for the OS and two user programs. When one job needs to
wait for I/O, the processor can switch to the other job, which is likely not waiting for
I/O (Figure 2.5b).
• Furthermore, we might expand memory to hold three, four, or more programs and
switch among all of them (Figure 2.5c). The approach is known as
multiprogramming, or multitasking. It is the central theme of modern operating
systems.

Time-Sharing Systems
• With the use of multiprogramming, batch processing can be quite efficient. However,
for many jobs, it is desirable to provide a mode in which the user interacts directly
with the computer.
• Indeed, for some jobs, such as transaction processing, an interactive mode is essential.
• Multiprogramming can also be used to handle multiple interactive jobs. This
technique is referred to as time sharing, because processor time is shared among
multiple users.
• In a time-sharing system, multiple users simultaneously access the system through
terminals, with the OS interleaving the execution of each user program in a short burst
or quantum of computation.
• Thus, if there are n users actively requesting service at one time, each user will only
see on the average 1/n of the effective computer capacity, not counting OS overhead.

1.3 OS Design Considerations for Multiprocessor Architectures

In an SMP (symmetric multiprocessing) system, the kernel can execute on any processor, and
typically each processor does self-scheduling from the pool of available processes or threads.

The kernel can be constructed as multiple processes or multiple threads, allowing portions of
the kernel to execute in parallel.

The SMP approach complicates the OS. The OS designer must deal with the complexity due
to sharing resources (like data structures) and coordinating actions (like accessing devices)
from multiple parts of the OS executing at the same time.

Techniques must be employed to resolve and synchronize claims to resources.

The key design issues include the following:

• Simultaneous concurrent processes or threads: Kernel routines need to be reentrant to
allow several processors to execute the same kernel code simultaneously. With multiple
processors executing the same or different parts of the kernel, kernel tables and management
structures must be managed properly to avoid data corruption or invalid operations.

• Scheduling: Any processor may perform scheduling, which complicates the task of
enforcing a scheduling policy and assuring that corruption of the scheduler data structures is
avoided. If kernel-level multithreading is used, then the opportunity exists to schedule
multiple threads from the same process simultaneously on multiple processors.
• Synchronization: With multiple active processes having potential access to shared
address spaces or shared I/O resources, care must be taken to provide effective
synchronization. Synchronization is a facility that enforces mutual exclusion and event
ordering. A common synchronization mechanism used in multiprocessor operating systems is
locks.

• Memory management: Memory management on a multiprocessor must deal with all of
the issues found on uniprocessor computers. The OS needs to exploit the available hardware
parallelism to achieve the best performance. The paging mechanisms on different processors
must be coordinated to enforce consistency when several processors share a page or segment
and to decide on page replacement.

• Reliability and fault tolerance: The OS should provide graceful degradation in the face
of processor failure. The scheduler and other portions of the OS must recognize the loss of a
processor and restructure management tables accordingly.

OS Design Considerations for Multicore Architectures

Current multicore vendors offer systems with up to eight cores on a single chip. With each
succeeding processor technology generation, the number of cores and the amount of shared
and dedicated cache memory increases, so that we are now entering the era of “many-core”
systems.

The design challenge for a many-core system is to efficiently harness the multicore
processing power and intelligently manage the substantial on-chip resources.

A central concern is how to match the inherent parallelism of a many-core system
with the performance requirements of applications. The potential for parallelism in
fact exists at three levels in contemporary multicore systems:

1. First, there is hardware parallelism within each core processor, known as instruction-
level parallelism, which may or may not be exploited by application programmers and
compilers.
2. Second, there is the potential for multiprogramming and multithreaded execution
within each processor.
3. Finally, there is the potential for a single application to execute in concurrent
processes or threads across multiple cores.

1.4 Operating System Structures

A system as large and complex as a modern operating system must be engineered carefully if
it is to function properly and be modified easily. A common approach is to partition the task
into small components, or modules, rather than have one monolithic system.

Simple Structure
Many operating systems do not have well-defined structures. Frequently, such systems
started as small, simple, and limited systems and then grew beyond their original scope.
• MS-DOS is an example of such a system. It was written to provide the most
functionality in the least space, so it was not carefully divided into modules. Figure
2.11 shows its structure.

• In MS-DOS, the interfaces and levels of functionality are not well separated. For
instance, application programs are able to access the basic I/O routines to write
directly to the display and disk drives. Such freedom leaves MS-DOS vulnerable to
errant (or malicious) programs, causing entire system crashes when user programs
fail.
• Of course, MS-DOS was also limited by the hardware of its era. Because the Intel
8088 for which it was written provides no dual mode and no hardware protection, the
designers of MS-DOS had no choice but to leave the base hardware accessible.
• Another example of limited structuring is the original UNIX operating system.
• It consists of two separable parts: the kernel and the system programs. The kernel is
further separated into a series of interfaces and device drivers, which have been added
and expanded over the years as UNIX has evolved.

• Everything below the system-call interface and above the physical hardware is the
kernel. The kernel provides the file system, CPU scheduling, memory management,
and other operating-system functions through system calls. An enormous amount of
functionality is thus combined into one level.
• This monolithic structure was difficult to implement and maintain.
• It had a distinct performance advantage, however: there is very little overhead in the
system call interface or in communication within the kernel.

Layered Approach

• With proper hardware support, operating systems can be broken into pieces that are
smaller and more appropriate than those allowed by the original MS-DOS and UNIX
systems.
• The operating system can then retain much greater control over the computer and over
the applications that make use of that computer.
• Implementers have more freedom in changing the inner workings of the system and in
creating modular operating systems.
• Under a top-down approach, the overall functionality and features are determined and
are separated into components.
• Information hiding is also important, because it leaves programmers free to
implement the low-level routines as they see fit, provided that the external interface of
the routine stays unchanged and that the routine itself performs the advertised task.
• A system can be made modular in many ways. One method is the layered
approach, in which the operating system is broken into a number of layers
(levels). The bottom layer (layer 0) is the hardware; the highest (layer N) is the user
interface. This layering structure is depicted in Figure 2.13.

An operating-system layer is an implementation of an abstract object made up of data and the
operations that can manipulate those data. A typical operating-system layer, say, layer M,
consists of data structures and a set of routines that can be invoked by higher-level layers.
Layer M, in turn, can invoke operations on lower-level layers.
Main advantages of the layered approach:
1. Simplicity of construction and debugging. The layers are selected so that each uses
functions (operations) and services of only lower-level layers. This approach
simplifies debugging and system verification.
2. Design and implementation of the system are simplified.
Each layer is implemented only with operations provided by lower-level layers. A
layer does not need to know how these operations are implemented; it needs to know
only what these operations do. Hence, each layer hides the existence of certain data
structures, operations, and hardware from higher-level layers.

Major difficulties with the layered approach:
1. Appropriately defining the various layers. Because a layer can use only lower-level
layers, careful planning is necessary. For example, the device driver for the backing store
(disk space used by virtual-memory algorithms) must be at a lower level than the memory-
management routines, because memory management requires the ability to use the backing
store. Other requirements may not be so obvious.
2. Less efficient than other types. For instance, when a user program executes an I/O
operation, it executes a system call that is trapped to the I/O layer, which calls the memory-
management layer, which in turn calls the CPU-scheduling layer, which then passes the
request to the hardware. At each layer, the parameters may be modified, data may need to be
passed, and so on. Each layer adds overhead to the system call. The net result is a system call
that takes longer than one on a nonlayered system.

Microkernels
• As UNIX expanded, the kernel became large and difficult to manage. In the mid-
1980s, researchers at Carnegie Mellon University developed an operating system
called Mach that modularized the kernel using the microkernel approach.
• This method structures the operating system by removing all nonessential components
from the kernel and implementing them as system and user-level programs. The result
is a smaller kernel.
• There is little consensus regarding which services should remain in the kernel and
which should be implemented in user space.
• Typically, however, microkernels provide minimal process and memory management,
in addition to a communication facility. Figure 2.14 illustrates the architecture of a
typical microkernel.
• The main function of the microkernel is to provide communication between the
client program and the various services that are also running in user space.
• Communication is provided through message passing. For example, if the client
program wishes to access a file, it must interact with the file server. The client
program and service never interact directly. Rather, they communicate indirectly by
exchanging messages with the microkernel.
Benefits of the microkernel approach:
1. It makes extending the operating system easier. All new services are added to user
space and consequently do not require modification of the kernel.
2. When the kernel does have to be modified, the changes tend to be fewer, because the
microkernel is a smaller kernel. The resulting operating system is easier to port from
one hardware design to another.
3. The microkernel also provides more security and reliability, since most services are
running as user rather than kernel processes. If a service fails, the rest of the
operating system remains untouched.
Examples: Mac OS X, and QNX, a real-time operating system for embedded systems.
Drawback:
• Unfortunately, the performance of microkernels can suffer due to increased system-
function overhead.
Modules
• This methodology for operating-system design involves using loadable kernel
modules. The kernel has a set of core components and links in additional services via
modules, either at boot time or during run time.
• This type of design is common in modern implementations of UNIX, such as Solaris,
Linux, and Mac OS X, as well as Windows.
• The idea of the design is for the kernel to provide core services while other services
are implemented dynamically, as the kernel is running. Linking services dynamically
is preferable to adding new features directly to the kernel, which would require
recompiling the kernel every time a change was made.
• Thus, for example, we might build CPU scheduling and memory management
algorithms directly into the kernel and then add support for different file systems by
way of loadable modules.
• The overall result resembles a layered system in that each kernel section has defined,
protected interfaces; but it is more flexible than a layered system, because any module
can call any other module.
• The approach is also similar to the microkernel approach in that the primary module
has only core functions and knowledge of how to load and communicate with other
modules; but it is more efficient because modules do not need to invoke message
passing to communicate.

The Solaris operating system structure, shown in Figure 2.15, is organized
around a core kernel with seven types of loadable kernel modules:
1. Scheduling classes
2. File systems
3. Loadable system calls
4. Executable formats
5. STREAMS modules
6. Miscellaneous
7. Device and bus drivers
Linux also uses loadable kernel modules, primarily for supporting device drivers and file
systems.

Hybrid Systems
• In practice, very few operating systems adopt a single, strictly defined structure.
Instead, they combine different structures, resulting in hybrid systems that address
performance, security, and usability issues.
• For example, both Linux and Solaris are monolithic, because having the operating
system in a single address space provides very efficient performance. However, they
are also modular, so that new functionality can be dynamically added to the kernel.
Windows is largely monolithic as well (again primarily for performance reasons), but
it retains some behavior typical of microkernel systems, including providing support
for separate subsystems (known as operating-system personalities) that run as user-
mode processes. Windows systems also provide support for dynamically loadable
kernel modules.

System Calls

• System calls provide an interface to the services made available by an operating
system.
• These calls are generally available as routines written in C and C++, although certain
low-level tasks (for example, tasks where hardware must be accessed directly) may
have to be written using assembly-language instructions.
To illustrate how system calls are used, consider writing a simple program to read data from
one file and copy it to another file.
The relationship between an API, the system-call interface, and the operating system
An application programmer prefers programming according to an API rather than invoking
actual system calls, because actual system calls can often be more detailed and difficult to
work with than the API available to the application programmer.
For most programming languages, the run-time support system (a set of functions built into
libraries included with a compiler) provides a system call interface that serves as the link to
system calls made available by the operating system. The system-call interface intercepts
function calls in the API and invokes the necessary system calls within the operating system.
Typically, a number is associated with each system call, and the system-call interface
maintains a table indexed according to these numbers. The system call interface then invokes
the intended system call in the operating-system kernel and returns the status of the system
call and any return values.
The caller need know nothing about how the system call is implemented or what it does
during execution. Rather, the caller need only obey the API and understand what the
operating system will do as a result of the execution of that system call. Thus, most of the
details of the operating-system interface are hidden from the programmer by the API and are
managed by the run-time
support library. The relationship between an API, the system-call interface, and the operating
system is shown in Figure 2.6, which illustrates how the operating system handles a user
application invoking the open() system call.
• System calls occur in different ways, depending on the computer in use. Often, more
information is required than simply the identity of the desired system call. The exact
type and amount of information vary according to the particular operating system and
call.
System calls can be grouped roughly into six major categories: process
control, file manipulation, device manipulation, information maintenance,
communications, and protection.
Some examples of system calls:

open()  A program initializes access to a file in a file system using the open system call.

read()  A program that needs to access data from a file stored in a file system uses the
        read system call.

write() It writes data from a buffer declared by the user to a given device, such as a file.
        This is the primary way to output data from a program by directly using a system
        call.

exec()  exec is a functionality of an operating system that runs an executable file in the
        context of an already existing process, replacing the previous executable.

fork()  fork is an operation whereby a process creates a copy of itself. fork is the
        primary method of process creation on Unix-like operating systems.

Linux Shell
• Although Linux systems have a graphical user interface, most programmers and
sophisticated users still prefer a command-line interface, called the shell.
• The shell command-line interface is much faster to use.
The bash shell (bash)
• It is heavily based on the original UNIX shell, Bourne shell (written by Steve
Bourne, then at Bell Labs). Its name is an acronym for Bourne Again SHell. Many
other shells are also in use (ksh, csh, etc.), but bash is the default shell in most Linux
systems.
• When the shell starts up, it initializes itself, then types a prompt character, often a
percent or dollar sign, on the screen and waits for the user to type a command line.
• When the user types a command line, the shell extracts the first word from it, where
word here means a run of characters delimited by a space or tab.
• It then assumes this word is the name of a program to be run, searches for this
program, and if it finds it, runs the program.
• The shell then suspends itself until the program terminates, at which time it tries to
read the next command.
• The shell is an ordinary user program. All it needs is the ability to read from the
keyboard and write to the monitor and the power to execute other programs.
• Commands may take arguments, which are passed to the called program as character
strings. For example, the command line

cp src dest

invokes the cp program with two arguments, src and dest. This program interprets
the first one to be the name of an existing file. It makes a copy of this file and calls
the copy dest.
• Not all arguments are file names. For example, in

head -20 file

the first argument, -20, tells head to print the first 20 lines of file, instead of the
default number of lines, 10.
• Arguments that control the operation of a command or specify an optional value are
called flags, and by convention are indicated with a dash. The dash is required to
avoid ambiguity, because the command
head 20 file
is perfectly legal, and tells head to first print the initial 10 lines of a file called 20, and
then print the initial 10 lines of a second file called file. Most Linux commands accept
multiple flags and arguments.
• To make it easy to specify multiple file names, the shell accepts magic characters,
sometimes called wild cards. An asterisk, for example, matches all possible strings,
so
ls *.c
tells ls to list all the files whose name ends in .c.
• A program like the shell does not have to open the terminal (keyboard and monitor) in
order to read from it or write to it. Instead, when it (or any other program) starts up, it
automatically has access to a file called standard input (for reading), a file called
standard output (for writing normal output), and a file called standard error (for
writing error messages).
• A program that reads its input from standard input, does some processing on it, and
writes its output to standard output is called a filter.
• It is possible to put a list of shell commands in a file and then start a shell with this
file as standard input. The (second) shell just processes them in order, the same as it
would with commands typed on the keyboard.
• Files containing shell commands are called shell scripts. Shell scripts may assign
values to shell variables and then read them later.
chmod
• In Linux, access to files is managed through file permissions, attributes, and
ownership. This ensures that only authorized users and processes can access files and
directories.
• The chmod command changes the access permissions of files and directories.
The name is an abbreviation of change mode.
• Syntax:
chmod [reference][operator][mode] file...
• The references distinguish the users to whom the permissions apply, i.e., they are a
list of letters specifying whom to give permissions to. Permissions are defined for the
owner of the file (the "user"), members of the group that owns the file (the "group"),
and anyone else ("others"). There are two ways to represent these permissions: with
symbols (alphanumeric characters), or with octal numbers (the digits 0 through 7).
The references are represented by one or more of the following letters:
Reference   Class   Description
u           owner   file's owner
g           group   users who are members of the file's group
o           other   users who are neither the file's owner nor members of the file's group
a           all     all three of the above
Numeric mode
A numeric mode is from one to four octal digits (0-7), derived by adding up the bits with
values 4, 2, and 1. Any omitted digits are assumed to be leading zeros.
EXAMPLES
Read by owner only
$ chmod 400 sample.txt
Read by group only
$ chmod 040 sample.txt
Read by others only
$ chmod 004 sample.txt
Write by owner only
$ chmod 200 sample.txt
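Putting the digit arithmetic together (sample.txt is the same illustrative file): each octal digit is a sum of read (4), write (2), and execute (1), so 644 gives the owner read and write, and the group and others read only:

```shell
# 644 = (4+2) for the user, 4 for the group, 4 for others.
touch sample.txt
chmod 644 sample.txt
ls -l sample.txt    # first column shows: -rw-r--r--
```

Note that a numeric mode replaces all of the file's permission bits at once.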
Symbolic mode
The format of a symbolic mode is '[ugoa...][[+-=][rwxXstugo...]...][,...]'. Multiple
symbolic operations can be given, separated by commas.
EXAMPLES
Deny execute permission to everyone.
$ chmod a-x sample.txt
Allow read permission to everyone.
$ chmod a+r sample.txt
Make a file readable and writable by the group and others.
$ chmod go+rw sample.txt
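Unlike the numeric form, which replaces all of the permission bits, symbolic modes adjust only the bits named. A short sketch (demo.sh is an illustrative file name):

```shell
touch demo.sh
chmod 600 demo.sh      # numeric: start from rw-------
chmod u+x demo.sh      # add execute for the owner only:  rwx------
chmod go+r demo.sh     # add read for group and others:   rwxr--r--
ls -l demo.sh          # first column shows: -rwxr--r--
```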
Linux Kernel
System calls: A system call is the means by which a process requests a specific kernel
service. There are several hundred system calls, which can be roughly grouped into six
categories: filesystem, process, scheduling, interprocess communication, socket (networking),
and miscellaneous. All system calls enter the kernel through this layer; each causes a trap
that switches execution from user mode into protected kernel mode and passes control to one
of the kernel components.
Interrupts and Dispatcher:
The kernel sits directly on the hardware; it enables interaction with I/O devices and the
memory management unit, and controls CPU access to them.
• Interrupt handlers are the primary way of interacting with devices, together with the
low-level dispatching mechanism.
• Dispatching occurs when an interrupt happens. The low-level code stops the
running process, saves its state in the kernel process structures, and starts the
appropriate driver.
• Process dispatching also happens when the kernel completes some operation and it
is time to start up a user process again. The dispatching code is in assembler and is
quite distinct from scheduling.
The kernel subsystems fall into three main components:
I/O Component:
• Virtual memory: Allocates and manages virtual memory for processes.
• File systems: Provides a global, hierarchical namespace for files, directories, and other
file-related objects, and provides file system functions.
• Network protocols: Supports the Sockets interface to users for the TCP/IP protocol suite.
• Character device drivers: Manages devices that require the kernel to send or receive data
one byte at a time, such as terminals, modems, and printers.
• Block device drivers: Manages devices that read and write data in blocks, such as various
forms of secondary memory (magnetic disks, CD-ROMs, etc.).
• Network device drivers: Manages network interface cards and communications ports that
connect to network devices, such as bridges and routers.
• Traps and faults: Handles traps and faults generated by the processor, such as a memory
fault.
• To the right in Fig. 10-3 are the other two key components of the Linux kernel.
These are responsible for the memory and process management tasks.
• Memory management tasks include maintaining the virtual-to-physical-memory
mappings, maintaining a cache of recently accessed pages, implementing a good
page-replacement policy, and bringing new pages of needed code and data into
memory on demand.
• The key responsibility of the process-management component is the creation and
termination of processes. It also includes the process scheduler, which chooses which
process or, rather, thread to run next.
• Code for signal handling also belongs to this component.
• While the three components are represented separately in the figure, they are highly
interdependent.
University Questions

DECEMBER 18
Sr. No.  Question                                                                    Marks
1        Explain the difference between monolithic kernel and microkernel.               5
2        What is an operating system? Explain various functions and objectives.         10
3        What is a system call? Explain any 5 system calls in detail.                   10
         Total                                                                          25

DECEMBER 19
1        Discuss the Operating System as a Resource Manager.                             5
2        Describe the Microkernel with a diagram.                                        5
         Total                                                                          10

MAY 18
1        Explain the difference between monolithic kernel and microkernel.               5
2        What is an operating system? Explain various functions and objectives.         10
3        What is a system call? Explain any 5 system calls in detail.                   10
         Total                                                                          25

MAY 19
1        Define Operating System. Brief the functions of the OS.                         5
2        Explain the Shell. Explain the use of the chmod command in Linux.               5
3        Differentiate between monolithic, layered, and microkernel structures of OS.   10
4        Write short notes on: System Calls.                                            10
         Total                                                                          30
References

1. William Stallings, Operating Systems: Internals and Design Principles, 8th Edition,
Prentice Hall, 2014. ISBN-10: 0133805913; ISBN-13: 978-0133805918.
2. Abraham Silberschatz, Peter Baer Galvin, and Greg Gagne, Operating System Concepts,
9th Edition, John Wiley & Sons, Inc., 2016. ISBN 978-81-265-5427-0.
3. Andrew Tanenbaum, Operating System Design and Implementation, 3rd Edition,
Pearson.