
CHAPTER 2: OPERATING SYSTEM STRUCTURE

Introduction

The way components are organized within an operating system has a great impact on its effectiveness and efficiency. Thus, an important aspect to consider when designing an operating system is how it should be structured internally. However, the internal structure of different operating systems can vary widely according to many factors. In this chapter, we describe the components and services an operating system provides to users, and discuss the various ways of structuring it.

I- Operating system components

1- Process management

A process is a program in execution. It needs certain resources, including CPU time, memory, files, and I/O devices,
to accomplish its task. The operating system is responsible for the following activities in connection with process
management:

Creation and deletion of processes.
Suspension and resumption of processes.
A mechanism for inter-process synchronization.
A mechanism for inter-process communication.
A mechanism for deadlock handling.
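As a concrete illustration, the sketch below uses the POSIX fork()/wait() primitives (exposed through Python's os module, so it runs only on Unix-like systems) to show process creation, termination, and cleanup. The exit code 42 is an arbitrary demo value.

```python
import os

def spawn_and_wait():
    """Create a child process and collect its exit status (POSIX only)."""
    pid = os.fork()                    # creation: duplicate the calling process
    if pid == 0:
        # child: runs as an independent process, then terminates
        os._exit(42)                   # arbitrary demo exit code
    # parent: waitpid() suspends us until the child terminates,
    # then the OS reclaims (deletes) the child's process entry
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)

print(spawn_and_wait())  # → 42
```

Note that the parent blocking in waitpid() until the child exits is itself a simple form of inter-process synchronization.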

2- Main-memory management

Main memory is a large array of words or bytes, each word or byte having its own address. Main memory is a repository of quickly accessible data shared by the CPU and I/O devices. The major activities of an operating system with regard to memory management are:

Keep track of which parts of memory are currently being used, and by whom.
Keep track of unused ("free") memory.
Protect memory space.
Decide which processes should be loaded into memory when space becomes available.
Allocate and deallocate memory space as needed.
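The bookkeeping in the first, second, and last of these activities can be sketched with a toy frame allocator; the class name, frame count, and process names below are invented for illustration.

```python
class FrameAllocator:
    """Toy tracker of which memory frames are in use, and by whom."""
    def __init__(self, nframes):
        self.owner = [None] * nframes      # None marks a free frame

    def allocate(self, pid):
        """Give the first free frame to process pid; fail if memory is full."""
        for i, owner in enumerate(self.owner):
            if owner is None:
                self.owner[i] = pid
                return i
        raise MemoryError("no free frames")

    def deallocate(self, frame):
        self.owner[frame] = None           # frame becomes free again

    def free_frames(self):
        return self.owner.count(None)

mm = FrameAllocator(4)
f0 = mm.allocate("P1")
mm.allocate("P2")
print(mm.free_frames())  # → 2
mm.deallocate(f0)
print(mm.free_frames())  # → 3
```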

3- File management

Secondary storage devices are too crude to use directly for long-term storage. The file system provides logical
objects and logical operations on those objects. A file is the basic long-term storage entity: it is a named collection
of persistent information that can be read or written. The file system supports directories, which are special files that
contain names of other files and associated file information. Commonly, files represent programs (both source and
object forms) and data. The operating system is responsible for the following activities in connection with file
management:

The creation and deletion of files and directories.
The support of primitives for manipulating files and directories.
The mapping of files onto secondary storage.
The backup of files on stable (non-volatile) storage media.
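These activities map directly onto ordinary file-system calls, which the sketch below exercises from Python (the file and directory names are arbitrary):

```python
import os, tempfile

workdir = tempfile.mkdtemp()                 # directory creation
path = os.path.join(workdir, "notes.txt")

with open(path, "w") as f:                   # file creation
    f.write("persistent information")        # write primitive

with open(path) as f:                        # read primitive
    data = f.read()
print(data)  # → persistent information

os.remove(path)                              # file deletion
os.rmdir(workdir)                            # directory deletion
print(os.path.exists(path))  # → False
```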

4- I/O system management


The OS provides a standard interface between programs (user or system) and devices. Device drivers are the
processes responsible for each device type. A driver encapsulates device-specific knowledge, e.g., for device
initiation and control, interrupt handling, and errors. There may be a process for each device, or even for each I/O
request, depending on the particular OS.

5- Secondary-storage management

Since main memory (primary storage) is volatile and too small to accommodate all data and programs permanently,
the computer system must provide secondary storage to back up main memory. Most modern computer systems use
disks as the principal on-line storage medium, for both programs and data. The operating system is responsible for
the following activities in connection with disk management:

Free-space management
Storage allocation
Scheduling of disk operations (including minimizing head movement)
Error handling, etc.
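To make "scheduling of disk operations" concrete, the sketch below compares first-come-first-served (FCFS) order with shortest-seek-time-first (SSTF) on an invented request queue; the cylinder numbers are arbitrary demo values.

```python
def fcfs_movement(start, requests):
    """Total head movement when requests are serviced in arrival order."""
    total, pos = 0, start
    for r in requests:
        total += abs(r - pos)
        pos = r
    return total

def sstf_movement(start, requests):
    """Total head movement when the closest pending request is always next."""
    pending, total, pos = list(requests), 0, start
    while pending:
        nxt = min(pending, key=lambda r: abs(r - pos))
        total += abs(nxt - pos)
        pos = nxt
        pending.remove(nxt)
    return total

queue = [98, 183, 37, 122]           # pending cylinder requests (made up)
print(fcfs_movement(53, queue))      # → 361
print(sstf_movement(53, queue))      # → 162
```

SSTF roughly halves the head movement here, which is why real disk schedulers reorder requests rather than serving them in arrival order.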

6- Protection system

Protection refers to a mechanism for controlling access by programs, processes, or users to both system and user resources. Protection is a general mechanism throughout the OS. All resource objects need protection (memory, processes, files, devices). Protection mechanisms help to detect errors as well as to prevent malicious destruction. The protection mechanism must:

Distinguish between authorized and unauthorized usage.
Specify the controls to be imposed.

7- Command interpreter system

A command interpreter is the interface of the operating system presented to the user. The user gives commands, which are executed by the operating system (usually by turning them into system calls). On some systems, the command interpreter is a standard part of the OS.
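The command-to-system-call path can be sketched in a few lines: parse the command line, then hand it to the OS as a child process. The sketch uses Python's subprocess module and assumes a Unix-like system where the echo command exists.

```python
import shlex, subprocess

def interpret(line):
    """Parse one command line and execute it as a child process."""
    argv = shlex.split(line)           # split respecting quotes, like a shell
    if not argv:
        return ""
    # on Unix this turns into fork/exec system calls under the hood
    result = subprocess.run(argv, capture_output=True, text=True)
    return result.stdout

print(interpret("echo hello"), end="")  # → hello
```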

Note: An OS consists of all of these components, plus lots of others.

II- Operating system services

• Program Execution

The system must be able to load a program into memory and run it, and to end its execution, either normally or abnormally (indicating an error).

• I/O Operations

Since user programs cannot execute I/O operations directly, the operating system must provide some means to
perform I/O.

• File System Manipulation

The output of a program may need to be written into new files or input taken from some files. The operating system
must provide this service.

• Error Detection

The OS needs to be constantly aware of possible errors that may occur in the CPU and memory hardware, in I/O devices, and in user programs. For each type of error, the OS should take appropriate action to ensure correct and consistent computing.

• Communications

Processes may exchange information, on the same computer or between computers over a network. Communication may take place via shared memory or through message passing (packets moved by the OS).
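A minimal message-passing sketch, assuming a POSIX system: parent and child share no memory, so the kernel carries the message through a pipe.

```python
import os

def exchange():
    """Send one message from a child process to its parent via a pipe."""
    rfd, wfd = os.pipe()               # kernel-managed channel
    pid = os.fork()
    if pid == 0:
        os.close(rfd)
        os.write(wfd, b"hello from child")   # message passing: send
        os._exit(0)
    os.close(wfd)
    msg = os.read(rfd, 1024)           # message passing: receive
    os.close(rfd)
    os.waitpid(pid, 0)
    return msg.decode()

print(exchange())  # → hello from child
```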

Additional functions exist not for helping the user, but rather for ensuring efficient system operations.

• Resource allocation: allocating resources to multiple users or multiple jobs running at the same time.

• Accounting: keeping track of and recording which users use how much and what kinds of computer resources, for accumulating usage statistics.

• Protection: ensuring that all access to system resources is controlled.

III- System calls

A system call is the main way user programs interact with the operating system to benefit from its services. System calls are generally available as assembly-language instructions and are usually listed in the manuals used by assembly-language programmers.

System calls are mostly accessed by programs via a high-level Application Program Interface (API) rather than by direct system-call use. The three most common APIs are the Win32 API for Windows, the POSIX API for POSIX-based systems (including virtually all versions of UNIX, Linux, and Mac OS X), and the Java API for the Java Virtual Machine (JVM).

1- System Call Parameter Passing

Often, making a system call requires more information than the simple identity of the desired call. This additional information is passed as parameters. The exact type and number of parameters vary according to the OS and the system call. Three general methods are used by a running program to pass parameters to the operating system:

The simplest method is to pass the parameters in registers. In some cases, there may be more parameters than registers.
Another way is to store the parameters in a table in memory and pass the address of the table as a parameter in a register. This approach is taken by Linux and Solaris.
Parameters can also be pushed onto the stack by the program and popped off the stack by the operating system.

Note: The table and stack methods do not limit the number or length of parameters being passed.
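The register method can be demonstrated on Linux by bypassing the C-library wrapper and issuing the raw system call. The syscall numbers below are architecture-specific assumptions (x86-64 and AArch64 Linux), which is itself a good argument for using the API rather than raw numbers.

```python
import ctypes, platform

# write(2) has a different number on each architecture (assumed values)
SYS_WRITE = {"x86_64": 1, "aarch64": 64}.get(platform.machine())

def raw_write(msg):
    """Pass fd, buffer, and length to the kernel as register parameters."""
    if SYS_WRITE is None:
        return -1                      # unknown architecture: skip the demo
    libc = ctypes.CDLL(None, use_errno=True)
    return libc.syscall(SYS_WRITE, 1, msg, len(msg))   # fd 1 = stdout

n = raw_write(b"hello via raw syscall\n")
```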

2- Types of System Calls:

System calls can be made to perform operations in the following categories:

Process control, file management, device management, information maintenance, and communications.

3- System Call Implementation

Typically, a number is associated with each system call.
The system-call interface maintains a table indexed according to these numbers.
The system-call interface invokes the intended system call in the OS kernel and returns the status of the system call and any return values.
The caller doesn't need to know anything about how the system call is implemented.
The caller just needs to obey the API and understand what the OS will do as a result of the call.
This is managed by the run-time support library (a set of functions built into libraries included with the compiler).
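The numbered-table mechanism can be modelled in a few lines; the call numbers, handler names, and return values below are invented for illustration.

```python
def sys_getpid(args):
    return 4242                        # pretend process id

def sys_add(args):                     # hypothetical demo call
    return args[0] + args[1]

# the system-call interface: a table indexed by call number
SYSCALL_TABLE = {0: sys_getpid, 1: sys_add}

def trap(number, *args):
    """Model of the kernel's trap handler: look up, dispatch, return status."""
    handler = SYSCALL_TABLE.get(number)
    if handler is None:
        return -1                      # unknown call: ENOSYS-style error
    return handler(args)

print(trap(0))         # → 4242
print(trap(1, 2, 3))   # → 5
print(trap(99))        # → -1
```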

Standard C Library example: a C program invoking the printf() library call, which in turn calls the write() system call.

IV- Operating system structure

An OS consists of all of the components listed above, plus system service routines, plus many others. But the big issues are:

How do we organize all of this?
What are the entities, and where do they exist?
How do these entities cooperate?

Basically, we seek a way to build a complex system that is effective, reliable, and extensible. In this section, we examine six different structures that have been tried, in order to get some idea of the spectrum of possibilities. These are by no means exhaustive, but they give an idea of some designs that have been tried in practice. The six structures are monolithic structure, layered structure, microkernel structure, virtual machines, exokernel structure, and client-server structure.

1- Monolithic system structure

In this approach, the entire OS runs as a single program in kernel mode. It is written as a collection of procedures linked together into a single large executable binary. When this technique is used, each procedure in the system has a well-defined interface in terms of parameters and results, and each one is free to call any other one. Although the monolithic approach imposes essentially no structure, it is nevertheless possible to impose at least a little structure, as follows:

1. A main program that invokes the requested service procedure.

2. A set of service procedures that carry out the system calls.

3. A set of utility procedures that help the service procedures.

In this model, there is one service procedure for each system call that takes care of it and executes it. The utility procedures do things that are needed by several service procedures, such as fetching data from user programs. This division of the procedures into three layers is shown in the figure.

Fig: A simple structuring model for a monolithic system.
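In code, the three layers might look like the sketch below (all names are invented; a real monolithic kernel does this with plain procedure calls inside one binary):

```python
# Layer 3: utility procedures shared by several services
def fetch_user_data(pid):
    return f"data-of-{pid}"            # stand-in for copying from user space

# Layer 2: one service procedure per system call
def service_read(pid):
    return "read(" + fetch_user_data(pid) + ")"

def service_write(pid):
    return "write(" + fetch_user_data(pid) + ")"

# Layer 1: the main program that invokes the requested service
SERVICES = {"read": service_read, "write": service_write}

def main_program(call, pid):
    return SERVICES[call](pid)

print(main_program("read", 7))  # → read(data-of-7)
```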

Traditionally, systems such as UNIX were built as a monolithic kernel.

Problems with monolithic kernels:

➢ hard to understand
➢ hard to modify
➢ unreliable: a bug anywhere can cause a system crash
➢ hard to maintain

2- Layered Approach

Since the beginnings of OS design, people have sought ways to organize the OS to simplify its design and construction. The traditional approach is layering: the operating system is divided into a number of layers (levels), each built on top of lower layers. With modularity, the layers are selected such that each uses the functions (operations) and services of only lower-level layers.

The first system constructed in this way was the THE system, built at the Technische Hogeschool Eindhoven in the Netherlands by Edsger Dijkstra and his students in 1968. The system had six layers, as shown in the figure.

Layer Function

5 The operator

4 User programs

3 Input/output management

2 Operator-process communication

1 Memory management

0 Processor allocation and multiprogramming

Figure Structure of the THE operating system.

Layer 0 dealt with allocation of the processor. Above layer 0, the system consisted of sequential processes, each performing a sequential computation, which did not have to worry about the fact that multiple processes were running on a single processor.

Layer 1 did the memory management.

Layer 2 handled communication between each process and the operator console. Above this layer, each process effectively had its own operator console.

Layer 3 took care of managing the I/O devices. Above layer 3, each process could deal with abstract I/O devices with nice properties, instead of real devices with many peculiarities.

Layer 4 was where the user programs were found. They did not have to worry about process, memory, console, or I/O management. The system operator process was located in layer 5.

Each level sees a logical machine provided by lower levels.

level 1 sees "virtual processors"
level 2 sees "virtual" memory
level 3 sees a "virtual console"
level 4 sees "virtual" I/O drivers

Note: This approach is not flexible and often has poor performance due to layer crossings.

Exercise: Describe the way Windows 2000 is structured.

3- Microkernel System Structure

The microkernel approach is the organizing structure currently in vogue. The goal is to minimize what goes into the kernel and to implement much of the OS as user-level processes. This results in:

Better reliability
Ease of extension and customization

In fact, with the traditional layered approach, all the layers went in the kernel, but that is not necessary. It is better to put as little as possible in kernel mode, because bugs in the kernel can bring the system down instantly. The idea behind the microkernel design is to achieve high reliability by splitting the OS into small, well-defined modules, of which only one, the microkernel, runs in kernel mode. The other modules run as relatively powerless ordinary user processes. In particular, by running each device driver and file system as a separate user process, a bug in one of them can crash that component, but not the entire system. A few of the better-known microkernels are Integrity, K42, L4, PikeOS, QNX, Symbian, and MINIX 3.

However, there is a performance overhead, due to the extra communication between user space and kernel space.

Exercise: Describe the way MINIX3 operating system is structured.

4- Client-server model

A slight variation of the microkernel idea is to distinguish two classes of processes: the servers, each of which provides some service, and the clients, which use these services. This is known as the client-server model. Clients and servers communicate through message passing. To obtain a service, a client process constructs a message saying what it wants and sends it to the appropriate server process. The server does the work and sends back the answer. Messages can be transmitted locally or across a network.

In this model, the kernel's job is to handle the communication between clients and servers.

5- Virtual Machines

This approach consists of implementing many machines on one: a single physical machine can host many so-called virtual machines, each running its own set of processes. These virtual machines are exact copies of the bare hardware, including kernel/user mode, I/O, interrupts, and everything else the real machine has. Because each virtual machine is identical to the true hardware, each one can run any operating system that will run directly on the bare hardware, and different virtual machines can run different operating systems. There are two approaches to virtualization: using a type 1 hypervisor (virtual machine monitor) or a type 2 hypervisor.

In the first approach, a single program called a type 1 hypervisor runs on the bare hardware in kernel mode. Its job is to provide multiple copies of the actual hardware, called virtual machines.

In the second approach, an OS (called the host OS) actually runs on the bare hardware, and the type 2 hypervisor is just a user program running on the host OS that creates virtual machines.

In both cases, the OSes running on top of the hypervisor are called guest OSes.

Note:

The virtual machine concept provides complete protection of system resources, since each virtual machine is isolated from all other virtual machines. The price is that sharing between them is difficult.

A virtual-machine system is a perfect vehicle for operating-systems research and development: system development is done on the virtual machine instead of on a physical machine, and so does not disrupt normal system operation.

The virtual machine concept is difficult to implement, due to the effort required to provide an exact duplicate of the underlying machine.

Virtual machines are useful for running different operating systems simultaneously on the same machine.

6- Exokernel approach

Rather than cloning the actual machine, as is done with virtual machines, another strategy is to partition it, i.e., to give each user a subset of the resources. At the bottom layer, running in kernel mode, is a program called the exokernel. Its job is to allocate resources to virtual machines and then check attempts to use them, to make sure no machine is trying to use somebody else's resources.
