Lecture01 OS I
Operating Systems
Computer System Structure
A computer system can be divided into four components: the hardware (CPU, memory, and I/O devices), the operating system, the application programs, and the users.
The dominant desktop operating system is Microsoft Windows
with a market share of around 82.74%. macOS by Apple Inc. is
in second place (13.23%), and the varieties of Linux are
collectively in third place (1.57%).
What Do Operating Systems Do?
In the case of a single-user system, the operating system is designed mostly for ease
of use, with some attention paid to performance and none paid to resource
utilization—how various hardware and software resources are shared.
In cases where a user sits at a terminal connected to a mainframe or a
minicomputer, the OS is designed to maximize resource utilization—to assure
that all available CPU time, memory, and I/O are used efficiently and that no
individual user takes more than her fair share.
In some cases, users of workstations have dedicated resources but frequently
use shared resources from servers. Their operating system is designed to
compromise between individual usability and resource utilization.
Handheld computers are resource poor and are optimized for usability and
battery life.
Some computers have little or no user interface, such as embedded computers in
devices and automobiles. Their operating systems are designed to run without human
intervention.
• The OS is a resource allocator
• It manages all resources
• It decides between conflicting requests for efficient and fair resource use
• The OS is a control program
• It controls the execution of programs to prevent errors and improper use of the computer
Definition of OS
Computer System Operation
For a computer to start running when it is powered up or rebooted, it needs to
have an initial program to run, called a bootstrap program.
It is stored within the computer hardware in read-only memory (ROM) or
electrically erasable programmable read-only memory (EEPROM), known by
the general term firmware.
It initializes all aspects of the system, from CPU registers to device controllers
to memory contents.
To start execution of the OS, the bootstrap program must locate the operating-
system kernel and load it into memory.
Once the kernel is loaded and executing, it can start providing services to the
system and its users. Some services are provided outside of the kernel, by
system programs that are loaded into memory at boot time to become system
processes, or system daemons that run the entire time the kernel is running.
Once this phase is complete, the system is fully booted, and the system waits
for some event to occur.
The occurrence of an event is usually signaled by an interrupt from either
the hardware or the software.
Hardware may trigger an interrupt at any time by sending a signal to the CPU,
usually by way of the system bus.
Software may trigger an interrupt by executing a special operation called a
system call (also called a monitor call).
When the CPU is interrupted, it stops what it is doing and immediately
transfers execution to a fixed location. The fixed location usually contains the
starting address where the service routine for the interrupt is located. The
interrupt service routine executes; on completion, the CPU resumes the
interrupted computation.
Storage Structure
The CPU can load instructions only from memory, so any programs
to run must be stored there.
General-purpose computers run most of their programs from rewritable
memory, called main memory (also called random-access memory, or RAM).
Main memory commonly is implemented in a semiconductor technology
called dynamic random-access memory (DRAM).
Until recently, most systems were single-processor systems. They
perform only one process at a given time and carry out the next
process in the queue only after the current process is completed.
Almost all single processor systems have other special-purpose
processors which may come in the form of device-specific
processors, such as disk, keyboard, and graphics controllers; or, on
mainframes, they may come in the form of more general-purpose
processors, such as I/O processors that move data rapidly among the
components of the system.
These processors execute only a limited set of system programs and do not run
user programs.
A single-processor system can thus be defined as a system that has
only one general-purpose CPU.
Systems that have two or more processors are known as parallel
systems or tightly coupled systems.
The processors of these systems usually share the common system bus,
memory, and peripheral (input/output) devices. These systems are fast at
data processing and have the capability to execute more than one program
simultaneously on different processors.
This type of processing is known as multiprocessing.
Multiprocessor systems are further divided into two types: asymmetric
multiprocessing and symmetric multiprocessing (SMP).
Multiprocessing operating systems, or parallel systems, support the use
of more than one processor in close communication. The advantages of
multiprocessing systems are:
• Increased Throughput − By increasing the number of processors, more work
can be completed per unit of time.
• Cost Saving − Parallel systems share memory, buses, peripherals, etc.
A multiprocessor system thus saves money compared to multiple single-processor systems.
Also, if a number of programs are to operate on the same data, it is cheaper to
store that data on one disk and share it among all processors than to keep
many copies of the same data.
• Increased Reliability − In this system, the workload is distributed among
several processors, which results in increased reliability. If one processor fails,
its failure may slightly slow down the system, but the system will continue to
work.
• Multiprocessing adds CPUs to increase computing
power. If the CPU has an integrated memory controller,
then adding CPUs can also increase the amount of
memory addressable in the system.
• Multiprocessing can cause a system to change its memory access
model from uniform memory access (UMA) to non-uniform memory
access (NUMA).
• A recent trend in CPU design is to include multiple computing
cores on a single chip. Such multiprocessor systems are termed multicore.
They can be more efficient than multiple chips with single cores
because on-chip communication is faster than between-chip
communication. In addition, one chip with multiple cores uses
significantly less power than multiple single-core chips.
• While multicore systems are multiprocessor systems, not all
multiprocessor systems are multicore.
Another type of multiprocessor system is a clustered system,
which gathers together multiple CPUs. These systems are considered
loosely coupled.
Clustered systems differ from multiprocessor systems in that they
are composed of two or more individual systems—or nodes—joined
together. Each node may be a single-processor system or a multicore
system.
The definition of the term clustered is not concrete; the generally
accepted definition is that clustered computers share storage and are
closely linked via LAN networking.
Clustering is usually used to provide high-availability service—that is,
service will continue even if one or more systems in the
cluster fail.
A layer of cluster software runs on the cluster nodes. Each node can
monitor one or more of the others. If the monitored machine fails, the
monitoring machine can take ownership of its storage, and restart the
application(s) that were running on the failed machine. The failed
machine may remain down, but the users and clients of the application
would see only a brief interruption of service.
Clustering can be structured asymmetrically or symmetrically.
Asymmetric clustering − In this mode, one machine is in hot-standby mode while
the other is running the applications. The hot-standby host (machine) does
nothing but monitor the active server. If that server fails, the hot-standby
host becomes the active server.
Symmetric clustering − In this mode, two or more hosts are running
applications, and they are monitoring each other. This mode is obviously
more efficient, as it uses all of the available hardware.
Since a cluster consists of several computer
systems connected via a network, clusters can also be used to provide
high-performance computing environments. Such systems can supply
significantly greater computational power than single-processor or SMP
systems because they can run an application concurrently on all computers
in the cluster.
To take advantage of clustering, however, the application
must be written specifically for it. This involves a technique known as
parallelization, which divides a program into separate components that
run in parallel on individual computers in the cluster.
Clustered systems' usage and features should expand greatly as
storage-area networks (SANs), which allow many systems to attach to a
pool of storage, become more prevalent.