OS Lec 3


OPERATING SYSTEM STRUCTURE

Now that we have seen what operating systems look like on the outside (i.e., the programmer's
interface), it is time to take a look inside. In the following sections, we will examine six different
structures that have been tried, in order to get some idea of the spectrum of possibilities. These
are by no means exhaustive, but they give an idea of some designs that have been tried in
practice. The six designs are monolithic systems, layered systems, microkernels, client-server
systems, virtual machines, and exokernels.

Layered Systems
One approach is to organize the operating system as a hierarchy of layers,
each one constructed upon the one below it. The first system constructed in this way was the
THE system built at the Technische Hogeschool Eindhoven in the Netherlands by E. W. Dijkstra
(1968) and his students.

The system was a simple batch system for a Dutch computer, the Electrologica X8, which had
32K of 27-bit words (bits were expensive back then). The system had six layers, as shown in Fig.
1-25. Layer 0 dealt with allocation of the processor, switching between processes when
interrupts occurred or timers expired.

Above layer 0, the system consisted of sequential processes, each of which could be programmed
without having to worry about the fact that multiple processes were running on a single
processor.

In other words, layer 0 provided the basic multiprogramming of the CPU. Layer 1 did the
memory management. It allocated space for processes in main memory and on a 512K word
drum used for holding parts of processes (pages) for which there was no room in main memory.

Above layer 1, processes did not have to worry about whether they were in memory or on the
drum; the layer 1 software took care of making sure pages were brought into memory whenever
they were needed. Layer 2 handled communication between each process and the operator
console (that is, the user). On top of this layer each process effectively had its own
operator console. Layer 3 took care of managing the I/O devices and buffering the information
streams to and from them. Above layer 3 each process could deal with abstract I/O devices with
nice properties, instead of real devices with many peculiarities. Layer 4 was where the user
programs were found. They did not have to worry about process, memory, console, or I/O
management. The system operator process was located in layer 5.

Kernel
The kernel is a computer program at the core of a computer's operating system with complete
control over everything in the system. It is an integral part of any operating system. It is the
"portion of the operating system code that is always resident in memory"

A Kernel is a computer program that is the heart and core of an Operating System. Since the
Operating System has control over the system, the Kernel also has control over everything
in the system. It is the most important part of an Operating System.

Whenever a system starts, the Kernel is the first program that is loaded after the bootloader,
because the Kernel has to manage the rest of the system for the Operating System.
The Kernel remains in memory until the Operating System is shut down.

The Kernel is responsible for low-level tasks such as disk management, memory management,
task management, etc. It provides an interface between the user and the hardware components
of the system. When a process makes a request to the Kernel, that request is called a System Call.

A Kernel is provided with a protected Kernel Space, a separate area of memory that is not
accessible to other application programs. The code of the Kernel is loaded into this protected
Kernel Space. The memory used by other applications is called the User Space. Because these
are two separate areas of memory, communication between them is somewhat slower.
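
To make this concrete, here is a minimal C sketch (assuming a POSIX system such as Linux) of a
user-space program making a System Call: the write() library call traps into the Kernel, which
performs the output on the program's behalf and then returns control to User Space.

    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *msg = "hello from user space\n";
        /* write() is a thin wrapper around the write system call: the process
           traps from user mode into kernel mode, the Kernel performs the I/O,
           and control returns to user space with the result. */
        ssize_t n = write(1, msg, strlen(msg));   /* file descriptor 1 = stdout */
        return (n < 0) ? 1 : 0;
    }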

Functions of a Kernel
Following are the functions of a Kernel:

 Access Computer Resources: A Kernel can access various computer resources like the
CPU, I/O devices, and other resources. It acts as a bridge between the user and the
resources of the system.
 Resource Management: It is the duty of a Kernel to share the resources between
various processes in such a way that there is uniform access to the resources by every
process.
 Memory Management: Every process needs some memory space, so memory must
be allocated and deallocated for its execution. All of this memory management is done
by the Kernel.
 Device Management: The peripheral devices connected to the system are used by the
processes, so the allocation of these devices is managed by the Kernel.

Monolithic Systems
In this approach the entire operating system runs as a single program in kernel mode. The
operating system is written as a collection of procedures, linked together into a single large
executable binary program.

When this technique is used, each procedure in the system is free to call any other one, if the
latter provides some useful computation that the former needs. Having thousands of procedures
that can call each other without restriction often leads to an unwieldy and difficult to understand
system.

To construct the actual object program of the operating system when this approach is used, one
first compiles all the individual procedures (or the files containing the procedures) and then binds
them all together into a single executable file using the system linker.

In terms of information hiding, there is essentially none—every procedure is visible to every
other procedure (as opposed to a structure containing modules or packages, in which much of the
information is hidden away inside modules, and only the officially designated entry points can be
called from outside the module).

Even in monolithic systems, however, it is possible to have some structure. The services (system
calls) provided by the operating system are requested by putting the parameters in a well-defined
place (e.g., on the stack) and then executing a trap instruction.

This instruction switches the machine from user mode to kernel mode and transfers control to the
operating system. The operating system then fetches the parameters and determines which
system call is to be carried out. After that, it indexes into a table that contains in slot k a pointer
to the procedure that carries out system call k. This organization suggests a basic structure for the
operating system:

1. A main program that invokes the requested service procedure.
2. A set of service procedures that carry out the system calls.
3. A set of utility procedures that help the service procedures.

In this model, for each system call there is one service procedure that takes care of it and
executes it. The utility procedures do things that are needed by several service procedures, such
as fetching data from user programs.
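
The three-part structure above can be sketched in C as a table of function pointers indexed by
the system-call number. The names sys_read, sys_write, and the table itself are made up for this
illustration; they are not the code of any real operating system.

    #include <stdio.h>

    /* Hypothetical service procedures, one per system call. */
    static long sys_read(long a, long b, long c)  { printf("read(%ld, %ld, %ld)\n",  a, b, c); return 0; }
    static long sys_write(long a, long b, long c) { printf("write(%ld, %ld, %ld)\n", a, b, c); return 0; }

    typedef long (*service_proc)(long, long, long);

    /* Slot k holds a pointer to the procedure that carries out system call k. */
    static service_proc syscall_table[] = { sys_read, sys_write };

    /* The "main program": after the trap, the parameters are fetched, the table
       is indexed by the system call number, and the service procedure is run. */
    long dispatch(unsigned int k, long a, long b, long c)
    {
        if (k >= sizeof(syscall_table) / sizeof(syscall_table[0]))
            return -1;                            /* unknown system call number */
        return syscall_table[k](a, b, c);
    }

    int main(void)
    {
        dispatch(1, 1, 0, 16);        /* pretend system call 1 is "write" */
        return 0;
    }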

In addition to the core operating system that is loaded when the computer is booted, many
operating systems support loadable extensions, such as I/O device drivers and file systems.
These components are loaded on demand.

Microkernels
As monolithic operating systems expanded, the kernel became large and difficult to
manage. In the mid-1980s, researchers at Carnegie Mellon University developed an operating
system called Mach that modularized the kernel using the microkernel approach.

This method structures the operating system by removing all nonessential components from the
kernel and implementing them as system and user-level programs. The result is a smaller kernel.

There is little consensus regarding which services should remain in the kernel and which should
be implemented in user space. Typically, however, microkernels provide minimal process and
memory management, in addition to a communication facility. Figure 2.14 illustrates the
architecture of a typical microkernel.

The main function of the microkernel is to provide communication between the client program
and the various services that are also running in user space. Communication is provided through
message passing, which was described in Section 2.4.5. For example, if the client program
wishes to access a file, it must interact with the file server. The client program and the service
never interact directly. Rather, they communicate indirectly by exchanging messages with the
microkernel. (The microkernel itself handles only interrupt handling, process scheduling, and
IPC.)
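
The exchange can be sketched in C. This is only an illustration: the message layout is invented
for the example, and ordinary Unix pipes stand in for the microkernel's message-passing
primitives. The point is that the client and the file server never call each other directly; each
only sends messages to, and receives messages from, the kernel.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Message format invented for this sketch. */
    struct message {
        char request[64];
        char reply[64];
    };

    int main(void)
    {
        int to_server[2], to_client[2];
        pipe(to_server);                    /* kernel-provided message channels */
        pipe(to_client);

        if (fork() == 0) {                  /* the "file server" process */
            struct message m;
            read(to_server[0], &m, sizeof m);             /* wait for a request */
            snprintf(m.reply, sizeof m.reply, "contents of %s", m.request);
            write(to_client[1], &m, sizeof m);            /* send the reply back */
            return 0;
        }

        struct message m = { .request = "report.txt" };   /* the "client" */
        write(to_server[1], &m, sizeof m);  /* hand the request to the kernel */
        read(to_client[0], &m, sizeof m);   /* block until the reply arrives */
        printf("client received: %s\n", m.reply);
        return 0;
    }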

One benefit of the microkernel approach is that it makes extending the operating system easier.
All new services are added to user space and consequently do not require modification of the
kernel. When the kernel does have to be modified, the changes tend to be fewer, because the
microkernel is a smaller kernel.

The resulting operating system is easier to port from one hardware design to another. The
microkernel also provides more security and reliability, since most services are running as user
processes rather than kernel processes. If a service fails, the rest of the operating system remains
untouched.

Some contemporary operating systems have used the microkernel approach. Tru64 UNIX
(formerly Digital UNIX) provides a UNIX interface to the user, but it is implemented with a
Mach kernel. The Mach kernel maps UNIX system calls into messages to the appropriate user-
level services. The Mac OS X kernel (also known as Darwin) is also partly based on the Mach
microkernel.
Another example is QNX, a real-time operating system for embedded systems. The QNX
Neutrino microkernel provides services for message passing and process scheduling. It also
handles low-level network communication and hardware interrupts. All other services in QNX
are provided by standard processes that run outside the kernel in user mode. Unfortunately, the
performance of microkernels can suffer due to increased system-function overhead.

Client-Server Model
A slight variation of the microkernel idea is to distinguish two classes of processes, the servers,
each of which provides some service, and the clients, which use these services. This model is
known as the client-server model. Often the lowest layer is a microkernel, but that is not
required. The essence is the presence of client processes and server processes.
Communication between clients and servers is often by message passing. To obtain a service, a
client process constructs a message saying what it wants and sends it to the appropriate service.
The service then does the work and sends back the answer. If the client and server run on the
same machine, certain optimizations are possible, but conceptually, we are talking about
message passing here.

An obvious generalization of this idea is to have the clients and servers run on different
computers, connected by a local or wide-area network, as depicted in Fig. 1-27. Since clients
communicate with servers by sending messages, the clients need not know whether the messages
are handled locally on their own machines, or whether they are sent across a network to servers
on a remote machine.

As far as the client is concerned, the same thing happens in both cases: requests are sent and
replies come back. Thus the client-server model is an abstraction that can be used for a single
machine or for a network of machines. Increasingly many systems involve users at their home
PCs as clients and large machines elsewhere running as servers.

In fact, much of the Web operates this way. A PC sends a request for a Web page to the server
and the Web page comes back. This is a typical use of the client-server model in a network.
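
A minimal C sketch (POSIX sockets assumed; the host name and the plain HTTP/1.0 request are
placeholders chosen only for illustration) shows the client side of this: build a request message,
send it to the server, and read whatever reply comes back. The client neither knows nor cares
how the server produces the page.

    #include <netdb.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* Look up the server and open a connection to it. */
        struct addrinfo hints = { .ai_socktype = SOCK_STREAM }, *srv;
        if (getaddrinfo("example.com", "80", &hints, &srv) != 0)
            return 1;
        int fd = socket(srv->ai_family, srv->ai_socktype, srv->ai_protocol);
        if (fd < 0 || connect(fd, srv->ai_addr, srv->ai_addrlen) < 0)
            return 1;

        /* The request message goes out... */
        const char *req = "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n";
        write(fd, req, strlen(req));

        /* ...and the reply comes back. */
        char buf[1024];
        ssize_t n = read(fd, buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("%s\n", buf);
        }
        close(fd);
        freeaddrinfo(srv);
        return 0;
    }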

Virtual Machines

A virtual machine (VM) is an operating system (OS) or application environment that is installed
on software, which imitates dedicated hardware. The end user has the same experience on a
virtual machine as they would have on dedicated hardware.
Specialized software, called a hypervisor, emulates the PC client or server's CPU, memory, hard
disk, network and other hardware resources completely, enabling virtual machines to share the
resources.
The hypervisor can emulate multiple virtual hardware platforms that are isolated from each
other, allowing virtual machines to run Linux and Windows Server operating systems on the
same underlying physical host. Virtualization limits costs by reducing the need for physical
hardware systems.
Virtual machines more efficiently use hardware, which lowers the quantities of hardware and
associated maintenance costs, and reduces power and cooling demand. They also ease
management because virtual hardware does not fail in the way physical hardware can.
Administrators can take advantage of virtual environments to simplify backups, disaster
recovery, new deployments and basic system administration tasks.

In computing, a virtual machine (VM) is an emulation of a computer system. Virtual machines
are based on computer architectures and provide functionality of a physical computer. Their
implementations may involve specialized hardware, software, or a combination.
There are different kinds of virtual machines, each with different functions:
 System virtual machines (also termed full virtualization VMs) provide a substitute for a
real machine. They provide functionality needed to execute entire operating systems.
A hypervisor uses native execution to share and manage hardware, allowing for multiple
environments which are isolated from one another, yet exist on the same physical
machine. Modern hypervisors use hardware-assisted virtualization, that is, virtualization-specific
hardware support, primarily from the host CPUs.
 Process virtual machines are designed to execute computer programs in a platform-
independent environment.
Some virtual machines, such as QEMU, are designed to also emulate different architectures and
allow execution of software applications and operating systems written for another CPU or
architecture. Operating-system-level virtualization allows the resources of a computer to be
partitioned via the kernel. The terms are not universally interchangeable.

Uses for Virtual Machines


Test new versions of operating systems: You can try out Windows 10 on your Windows 7
computer if you aren't willing to upgrade yet.
Experiment with other operating systems: Installing various distributions of Linux in a virtual
machine lets you experiment with them and learn how they work. And running macOS on
Windows 10 in a virtual machine lets you get used to a different operating system you're
considering using full-time.
Use software requiring an outdated operating system: If you’ve got an important application that
only runs on Windows XP, you can install XP in a virtual machine and run the application there.
This allows you to use an application that only works with Windows XP without actually
installing it on your computer. This is important since Windows XP no longer receives support
from Microsoft.
Run software designed for another operating system: Mac and Linux users can run
Windows in a virtual machine to use Windows software on their computers without
compatibility headaches. Unfortunately, games are a problem. Virtual machine programs
introduce overhead and 3D games will not run smoothly in a VM.
Test software on multiple platforms: If you need to test whether an application works on
multiple operating systems, you can install each in a virtual machine.
Consolidate servers: Businesses running multiple servers can place some of them into virtual
machines and run them on a single computer. Each virtual machine is an isolated container, so
this doesn’t introduce the security headaches involved with running different servers on the same
operating system. The virtual machines can also be moved between physical servers.

Exokernels
Exokernel is an operating system developed at MIT that provides application-level
management of hardware resources. This architecture is designed to separate resource protection
from management to facilitate application-specific customization.
Some of the features of exokernel operating systems include:
 Better support for application control
 Separates security from management
 Abstractions are moved securely to an untrusted library operating system
 Provides a low-level interface
 Library operating systems offer portability and compatibility

Principles of Exokernels
1. Separate protection and management: Resource management is restricted to functions
necessary for protection.
2. Expose allocation: Applications allocate resources explicitly.
3. Expose name: Exokernels use physical names wherever possible.
4. Expose revocation: Exokernels let applications choose which instance of a resource to give
up.
5. Expose information: Exokernels expose all system information and collect data that
applications cannot easily derive locally.
Advantages of Exokernels
1. Significant performance increase.
2. Applications can make more efficient and intelligent use of hardware resources by being aware
of resource availability, revocation, and allocation.
3. Easier development and testing of new operating system ideas (new scheduling techniques,
memory management methods, etc.).
Disadvantages of Exokernels
1. Complexity in design of exokernel interfaces.
2. Less consistency.
Shell
The program between the user and the kernel is known as the shell. It translates the many
commands that are typed into the terminal session. A file containing a sequence of such
commands is known as a shell script.
There are two major types of shells in Unix. These are Bourne shell and C Shell. The Bourne
shell is the default shell for version 7 Unix.
The character $ is the default prompt for the Bourne shell. The C shell is a command processor
that runs in a text window. The character % is the default prompt for the C shell.
Both the Shell and the Kernel are parts of the Operating System, and both are used for
performing operations on the system. When a user gives a command to perform an operation,
the request first goes to the Shell. The Shell, also called the interpreter, translates the user's
command into a form the machine can act on and then passes the request on to the Kernel. The
Shell is thus the interpreter of commands, converting the user's request into machine language
for the Kernel.

Features of Shells
a. Wildcard substitution in file names (pattern-matching)
Carries out commands on a group of files by specifying a pattern to match, rather than specifying
an actual file name.
b. Command aliasing
Gives an alias name to a command or phrase. When the shell encounters an alias on the
command line or in a shell script, it substitutes the text to which the alias refers.
c. Command history
Records the commands you enter in a history file. You can use this file to easily access, modify,
and reissue any listed command.
d. File name substitution
Automatically produces a list of file names on a command line using all pattern-matching
characters.
e. Input and output redirection
Redirects input away from the keyboard and redirects output to a file or device other than the
terminal. For example, input to a program can be provided from a file and redirected to the
printer or to another file.
f. Piping
Links any number of commands together to form a complex program. The standard output of one
program becomes the standard input of the next (see the sketch after this list).
g. Shell variable substitution
Stores data in user-defined variables and predefined shell variables.
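
To make features e and f concrete, the following C sketch (POSIX assumed) shows roughly what
a shell does to run the pipeline ls | wc -l: create a pipe, redirect the first command's standard
output and the second command's standard input into it, and run both. It illustrates the
mechanism only; it is not the code of any actual shell, and error handling is omitted.

    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        pipe(fd);                          /* fd[0] = read end, fd[1] = write end */

        if (fork() == 0) {                 /* first command: ls */
            dup2(fd[1], STDOUT_FILENO);    /* its stdout now feeds the pipe */
            close(fd[0]); close(fd[1]);
            execlp("ls", "ls", (char *)NULL);
            _exit(127);                    /* reached only if exec fails */
        }
        if (fork() == 0) {                 /* second command: wc -l */
            dup2(fd[0], STDIN_FILENO);     /* its stdin now reads from the pipe */
            close(fd[0]); close(fd[1]);
            execlp("wc", "wc", "-l", (char *)NULL);
            _exit(127);
        }

        close(fd[0]); close(fd[1]);        /* the shell keeps neither end open */
        wait(NULL);                        /* wait for both children to finish */
        wait(NULL);
        return 0;
    }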

Some of the most popular commercial shells are:

 Korn shell
 Bourne Shell
 C shell
 POSIX shell

Korn shell: Korn shell is basically a UNIX shell that was developed by David Korn in Bell
Labs. It has many features of C shell (csh) and Bourne Shell (bsh) and is one of the most
efficient shells.

Bourne shell: A Bourne shell (sh) is a UNIX shell or command processor that is used for
scripting. It was developed in 1977 by Stephen Bourne of AT&T and introduced in UNIX
Version 7, replacing the Mashey shell (sh).
The Bourne shell is also known by its executable program name, "sh" and the dollar symbol, "$,"
which is used with command prompts.

C shell: C shell is a UNIX shell created by Bill Joy at the University of California at Berkeley as
an alternative to UNIX’s original shell, the Bourne shell. These two UNIX shells, along with the
Korn shell, are the three most commonly used shells. The C shell program name is csh. The C
shell was invented for programmers who prefer a syntax similar to that of the C language.

POSIX shell: POSIX is an acronym for “Portable Operating System Interface”. POSIX shell is
based on the standard defined in Portable Operating System Interface (POSIX) – IEEE P1003.2.
It is a set of standards codified by the IEEE and issued by ANSI and ISO. POSIX makes the
task of cross-platform software development easier.
