It should be noted that the operating system’s real customers are the application programs (via the
application programmers, of course). They are the ones who deal directly with the operating system
and its abstractions. In contrast, end users deal with the abstractions provided by the user interface,
either a command-line shell or a graphical interface. While the abstractions at the user interface may
be similar to the ones provided by the operating system, this is not always the case. To make this
point clearer, consider the normal Windows desktop and the line-oriented command prompt. Both
are programs running on the Windows operating system and use the abstractions Windows provides,
but they offer very different user interfaces. Similarly, a Linux user running Gnome or KDE sees a
very different interface than a Linux user working directly on top of the underlying X Window
System, but the underlying operating system abstractions are the same in both cases.
b. Write a short note on fifth generation Operating System.
1.2.5 The Fifth Generation (1990–Present):
The first true handheld phone appeared in the 1970s. Later, Nokia released the N9000, which literally combined two, mostly separate devices: a phone and a
PDA (Personal Digital Assistant). In 1997, Ericsson coined the term smartphone for its GS88
‘‘Penelope.’’
Most smartphones in the first decade after their inception were running Symbian OS. It was the
operating system of choice for popular brands like Samsung, Sony Ericsson, Motorola, and
especially Nokia.
Operating systems like RIM’s Blackberry OS (introduced for smartphones in 2002) and
Apple’s iOS (released for the first iPhone in 2007) started eating into Symbian’s market share.
In 2011, Nokia ditched Symbian and announced it would focus on Windows
Phone as its primary platform. For some time, Apple and RIM were the toast of the town (although
not nearly as dominant as Symbian had been), but it did not take very long for Android, a Linux-based
operating system released by Google in 2008, to overtake them. For phone manufacturers, Android had the
advantage that it was open source and available under a permissive license. As a result, they could
tinker with it and adapt it to their own hardware with ease. Also, it has a huge community of
developers writing apps, mostly in the familiar Java programming language. Even so, the past years
have shown that the dominance may not last, and Android’s competitors are eager to claw back
some of its market share.
c. Explain the micro kernel approach of Operating System design.
The basic idea behind the microkernel design is to achieve high reliability by splitting the operating
system up into small, well-defined modules, only one of which—the microkernel—runs in kernel
mode and the rest run as relatively powerless ordinary user processes.
In particular, by running each device driver and file system as a separate user process, a bug in one
of these can crash that component, but cannot crash the entire system. Thus a bug in the audio driver
will cause the sound to be garbled or stop, but will not crash the computer.
A few of the better-known microkernels include Integrity, K42, L4, PikeOS, QNX, Symbian, and
MINIX 3.
The MINIX 3 microkernel is only about 12,000 lines of C and some 1400 lines of assembler for very
low-level functions such as catching interrupts and switching processes. The C code manages and
schedules processes, handles interprocess communication (by passing messages between processes),
and offers a set of about 40 kernel calls to allow the rest of the operating system to do its work.
These calls perform functions like hooking handlers to interrupts, moving data between address
spaces, and installing memory maps for new processes. The process structure of MINIX 3 is shown
in Fig. 1-26, with the kernel call handlers labeled Sys.
Outside the kernel, the system is structured as three layers of processes all running in user mode. The
lowest layer contains the device drivers. Since they run in user mode, they do not have physical
access to the I/O port space and cannot issue I/O commands directly. Instead, to program an I/O
device, the driver builds a structure telling which values to write to which I/O ports and makes a
kernel call telling the kernel to do the write. This approach means that the kernel can check that the
driver is writing (or reading) only the I/O ports it is authorized to use. Consequently (and unlike a
monolithic design), a buggy audio driver cannot accidentally write on the disk.
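This authorization check can be sketched in a few lines. The function names, port numbers, and request format below are all made up for illustration; they are not the real MINIX 3 kernel-call interface, only the shape of the idea: the driver describes the I/O as plain data, and the kernel validates it before acting.

```python
# Hypothetical sketch (NOT the real MINIX 3 API): a user-mode driver
# describes an I/O write as data, and the kernel checks the request
# against the ports that driver has been granted before performing it.

AUDIO_PORTS = range(0x220, 0x230)   # ports granted to the audio driver (made-up values)
DISK_PORTS  = range(0x1F0, 0x1F8)   # ports granted to the disk driver (made-up values)

def build_io_request(port, value):
    """Driver side: build a structure describing the write; no direct I/O."""
    return {"port": port, "value": value}

def kernel_do_write(request, granted_ports):
    """Kernel side: perform the write only if the driver may touch that port."""
    if request["port"] not in granted_ports:
        raise PermissionError("driver not authorized for port %#x" % request["port"])
    return ("wrote", request["port"], request["value"])  # stand-in for the real port write

# A buggy audio driver trying to write a disk port is caught by the kernel:
try:
    kernel_do_write(build_io_request(0x1F0, 0xFF), AUDIO_PORTS)
except PermissionError as e:
    print("blocked:", e)
```

In a monolithic kernel the driver would issue the port write itself, so no such check is possible; here the single point of entry makes the check unavoidable.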
d. List and explain any five system calls use in file management.
Many system calls relate to the file system; any five, with explanation, earn 1 mark each. Common examples: open (opens a file and returns a file descriptor for use in subsequent calls), read (transfers data from the file into a buffer in the caller's memory), write (transfers data from the caller's buffer to the file), lseek (repositions the current file offset within the file), and close (releases the file descriptor when the file is no longer needed).
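The five calls can be demonstrated directly, since Python's os module exposes thin wrappers over the corresponding POSIX system calls:

```python
import os, tempfile

# open, write, lseek, read, close: each os.* call below is a thin wrapper
# over the POSIX system call of the same name.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_CREAT | os.O_RDWR)  # open: create/open, returns a file descriptor
os.write(fd, b"hello syscalls")             # write: bytes from the process to the file
os.lseek(fd, 0, os.SEEK_SET)                # lseek: reposition the offset to the start
data = os.read(fd, 5)                       # read: bytes from the file into the process
os.close(fd)                                # close: release the descriptor

print(data)  # b'hello'
```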
e. Explain process states and possible transitions among these states using diagram.
In Fig. 2-2 we see a state diagram showing the three states a process may be in:
1. Running (actually using the CPU at that instant).
2. Ready (runnable; temporarily stopped to let another process run).
3. Blocked (unable to run until some external event happens).
Four transitions are possible among these states. Transition 1 (running to blocked) occurs when a process discovers it cannot continue, for example because it must wait for input. Transition 2 (running to ready) occurs when the scheduler decides the process has run long enough and picks another one. Transition 3 (ready to running) occurs when the scheduler picks this process to run. Transition 4 (blocked to ready) occurs when the external event the process was waiting for, such as the arrival of input, happens.
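The diagram's legal transitions can be encoded as a small table; any pair not in the table (for instance blocked directly to running) is invalid. This is only an illustrative sketch of the state machine, not scheduler code:

```python
# Legal state transitions from the three-state process diagram.
LEGAL = {
    ("running", "blocked"),  # 1: process must wait for an external event
    ("running", "ready"),    # 2: scheduler preempts the process
    ("ready",   "running"),  # 3: scheduler dispatches the process
    ("blocked", "ready"),    # 4: the awaited event occurs
}

def transition(state, new_state):
    """Move to new_state if the diagram allows it, else raise."""
    if (state, new_state) not in LEGAL:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

s = "running"
s = transition(s, "blocked")   # waits for input
s = transition(s, "ready")     # input arrives
s = transition(s, "running")   # scheduler picks it again
print(s)  # running
```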
The core problem here is that the two programs both reference absolute physical memory. That is
not what we want at all. What we want is that each program can reference a private set of addresses
local to it.
Contiguous disk-space allocation has two significant advantages. First, it is simple to implement
because keeping track of where a file’s blocks are is reduced to remembering two numbers: the disk
address of the first block and the number of blocks in the file. Given the number of the first block,
the number of any other block can be found by a simple addition.
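The "simple addition" can be made concrete. With contiguous allocation, a file is fully described by two numbers, and the disk address of any block follows directly:

```python
def block_address(first_block, block_number):
    """Disk address of the n-th block (0-based) of a contiguously allocated file."""
    return first_block + block_number

# A file recorded as (first_block=100, length=6) occupies blocks 100..105:
print([block_address(100, n) for n in range(6)])  # [100, 101, 102, 103, 104, 105]
```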
Second, the read performance is excellent because the entire file can be read from the disk in a single
operation. Only one seek is needed (to the first block). After that, no more seeks or rotational delays
are needed, so data come in at the full bandwidth of the disk. Thus contiguous allocation is simple to
implement and has high performance.
Unfortunately, contiguous allocation also has a very serious drawback: over the course of time, the
disk becomes fragmented.
e. Define Deadlock. List the four conditions that must hold for there to be a deadlock.
Deadlock can be defined formally as follows: (1 mark)
A set of processes is deadlocked if each process in the set is waiting for an event that only another
process in the set can cause.
Coffman et al. (1971) showed that four conditions must hold for there to be a (resource) deadlock:
1. Mutual exclusion condition. Each resource is either currently assigned to exactly one process or is
available.
2. Hold-and-wait condition. Processes currently holding resources that were granted earlier can
request new resources.
3. No-preemption condition. Resources previously granted cannot be forcibly taken away from a
process. They must be explicitly released by the process holding them.
4. Circular wait condition. There must be a circular list of two or more processes, each of which is
waiting for a resource held by the next member of the chain.
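The circular-wait condition can be visualized with a "wait-for" graph: an edge from one process to another means the first is waiting for a resource the second holds, and a deadlock among the processes appears as a cycle. The sketch below simplifies by letting each process wait on at most one other process:

```python
def has_cycle(waits_for):
    """Detect a cycle in a wait-for graph given as {process: process it waits on}."""
    for start in waits_for:
        seen, node = set(), start
        while node in waits_for:      # follow the chain of waits
            if node in seen:
                return True           # we came back around: circular wait
            seen.add(node)
            node = waits_for[node]
    return False

# A waits for B, B waits for C, C waits for A: circular wait, hence deadlock.
print(has_cycle({"A": "B", "B": "C", "C": "A"}))  # True
# A waits for B, B waits for C, C waits for nobody: no deadlock.
print(has_cycle({"A": "B", "B": "C"}))            # False
```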
f. Explain recovery from deadlock through preemption and rollback.
Recovery through Preemption (2 marks or 3 marks according to the contents)
In some cases it may be possible to temporarily take a resource away from its current owner and give
it to another process. In many cases, manual intervention may be required, especially in batch-
processing operating systems running on mainframes.
For example, to take a laser printer away from its owner, the operator can collect all the sheets
already printed and put them in a pile. Then the process can be suspended (marked as not runnable).
At this point the printer can be assigned to another process. When that process finishes, the pile of
printed sheets can be put back in the printer’s output tray and the original process restarted.
The ability to take a resource away from a process, have another process use it, and then give it back
without the process noticing it is highly dependent on the nature of the resource. Recovering this
way is frequently difficult or impossible. Choosing the process to suspend depends largely on which
ones have resources that can easily be taken back.
Recovery through Rollback (3 marks or 2 marks according to the contents)
If the system designers and machine operators know that deadlocks are likely, they can arrange to
have processes checkpointed periodically. Checkpointing a process means that its state is written to
a file so that it can be restarted later. The checkpoint contains not only the memory image, but also
the resource state, in other words, which resources are currently assigned to the process. To be most
effective, new checkpoints should not overwrite old ones but should be written to new files, so as the
process executes, a whole sequence accumulates. When a deadlock is detected, it is easy to see
which resources are needed. To do the recovery, a process that owns a needed resource is rolled back
to a point in time before it acquired that resource by starting at one of its earlier checkpoints.
All the work done since the checkpoint is lost (e.g., output printed since the checkpoint must be
discarded, since it will be printed again). In effect, the process is reset to an earlier moment when it
did not have the resource, which is now assigned to one of the deadlocked processes. If the restarted
process tries to acquire the resource again, it will have to wait until it becomes available.
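The checkpoint-and-rollback scheme can be sketched in miniature. Here a dict stands in for the memory image and resource state, and JSON files stand in for checkpoint files; the file names and structure are illustrative only:

```python
import json, os, tempfile

ckpt_dir = tempfile.mkdtemp()

def checkpoint(state, seq):
    """Write the process state to a NEW file, so a sequence accumulates."""
    path = os.path.join(ckpt_dir, f"ckpt-{seq}.json")
    with open(path, "w") as f:
        json.dump(state, f)
    return path

def rollback(path):
    """Restore the process state saved in an earlier checkpoint."""
    with open(path) as f:
        return json.load(f)

state = {"memory": {"x": 1}, "resources": []}
p0 = checkpoint(state, 0)               # checkpoint taken BEFORE acquiring the printer
state["resources"].append("printer")
p1 = checkpoint(state, 1)               # checkpoint taken after acquiring it

restored = rollback(p0)                 # roll back to before the acquisition
print(restored["resources"])  # []
```

Rolling back to p0 yields a state that does not hold the printer, so the resource can be given to another deadlocked process, exactly as described above; work done after p0 is lost.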
Some clouds offer direct access to a virtual machine, which the user can use in any way he sees fit.
Thus, the same cloud may run different operating systems, possibly on the same hardware. In cloud
terms, this is known as IAAS (Infrastructure As A Service), as opposed to PAAS (Platform As A
Service, which delivers an environment that includes things such as a specific OS, database, Web
server, and so on), SAAS (Software As A Service,which offers access to specific software, such as
Microsoft Office 365, or Google Apps), and many other types of as-a-service.
The kernel sits directly on the hardware and enables interactions with I/O devices and the memory
management unit and controls CPU access to them. At the lowest level, as shown in Fig. 10-3 it
contains interrupt handlers, which are the primary way for interacting with devices, and the low-
level dispatching mechanism. This dispatching occurs when an interrupt happens. The low-level
code here stops the running process, saves its state in the kernel process structures, and starts the
appropriate driver. Process dispatching also happens when the kernel completes some operations and
it is time to start up a user process again. The dispatching code is in assembler and is quite distinct
from scheduling. Next, we divide the various kernel subsystems into three main components.
The I/O component in Fig. 10-3 contains all kernel pieces responsible for interacting with devices
and performing network and storage I/O operations. At the highest level, the I/O operations are all
integrated under a VFS (Virtual File System) layer. That is, at the top level, performing a read
operation on a file looks the same to an application regardless of the file system or device the file lives on.
At the lowest level, all I/O operations pass through some device driver. All Linux drivers are
classified as either character-device drivers or block-device drivers, the main difference being that
seeks and random accesses are allowed on block devices and not on character devices.
Above the device-driver level, the kernel code is different for each device type.
Character devices may be used in two different ways. Some programs, such as
visual editors like vi and emacs, want every keystroke as it is hit. Raw terminal
(tty) I/O makes this possible. Other software, such as the shell, is line oriented, allowing
users to edit the whole line before hitting ENTER to send it to the program.
In this case the character stream from the terminal device is passed through a so-called
line discipline, and appropriate formatting is applied.
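The two modes can be contrasted with a toy line discipline. This is only a sketch of the buffering behavior, not the kernel tty code: in raw mode each keystroke would be delivered immediately, whereas here characters are buffered and edited until ENTER completes the line:

```python
class LineDiscipline:
    """Toy line discipline: buffer keystrokes, handle backspace, emit on ENTER."""
    def __init__(self):
        self.buf = []

    def feed(self, ch):
        """Feed one keystroke; return a completed line on ENTER, else None."""
        if ch == "\n":
            line, self.buf = "".join(self.buf), []
            return line               # the whole edited line reaches the program at once
        if ch == "\b":
            if self.buf:
                self.buf.pop()        # in-line editing handled before delivery
            return None
        self.buf.append(ch)
        return None

ld = LineDiscipline()
for ch in "lx\bs\n":                  # user types l, x, backspace, s, ENTER
    line = ld.feed(ch)
print(line)  # ls
```

A raw-mode editor like vi bypasses this buffering and sees the x and the backspace as individual keystrokes.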
On top of the disk drivers is the I/O scheduler, which is responsible for ordering
and issuing disk-operation requests in a way that tries to minimize wasteful disk-head
movement or to meet some other system policy.
b. Explain the booting of Linux operating system.
When the computer starts, the BIOS performs a Power-On Self-Test
(POST) and initial device discovery and initialization, since the
OS’ boot process may rely on access to disks, screens, keyboards, and so on. Next,
the first sector of the boot disk, the MBR (Master Boot Record), is read into a
fixed memory location and executed. This sector contains a small (512-byte) program
that loads a standalone program called boot from the boot device, such as a
SATA or SCSI disk. The boot program first copies itself to a fixed high-memory
address to free up low memory for the operating system.
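One concrete detail of this step: the MBR is a single 512-byte sector whose last two bytes must be the boot signature 0x55 0xAA, which the firmware checks before executing the sector. A minimal sketch of that sanity check, using a fabricated sector:

```python
def is_valid_mbr(sector: bytes) -> bool:
    """True if the sector is 512 bytes and ends with the 0x55 0xAA boot signature."""
    return len(sector) == 512 and sector[510] == 0x55 and sector[511] == 0xAA

# A fabricated sector: 510 zero bytes standing in for boot code, plus the signature.
fake_mbr = bytes(510) + b"\x55\xaa"
print(is_valid_mbr(fake_mbr))        # True
print(is_valid_mbr(bytes(512)))      # False: signature missing
```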
Once moved, boot reads the root directory of the boot device. To do this, it
must understand the file system and directory format, which is the case with some
bootloaders such as GRUB (GRand Unified Bootloader). Other popular bootloaders,
such as LILO (LInux LOader), do not understand file systems; they rely instead on a
block map recorded when the loader is installed.
Then boot reads in the operating system kernel and jumps to it. At this point,
it has finished its job and the kernel is running.
The kernel start-up code is written in assembly language and is highly machine
dependent. Typical work includes setting up the kernel stack, identifying the CPU
type, calculating the amount of RAM present, disabling interrupts, enabling the
MMU, and finally calling the C-language main procedure to start the main part of
the operating system.
Next the kernel data structures are allocated. Most are of fixed size, but a few,
such as the page cache and certain page table structures, depend on the amount of
RAM available.
Once all the hardware has been configured, the next thing to do is to carefully
handcraft process 0, set up its stack, and run it. Process 0 continues initialization,
doing things like programming the real-time clock, mounting the root file system,
and creating init (process 1) and the page daemon (process 2).
Init checks its flags to see if it is supposed to come up single user or multiuser.
Then it reads /etc/ttys,
which lists the terminals and some of their properties. For each enabled terminal, it
forks off a copy of itself, which does some housekeeping and then executes a program
called getty.
Getty sets the line speed and other properties for each line (some of which may
be modems, for example), and then displays a login: prompt. After the user name is
typed, getty is replaced by login, which asks for a password.
If it is correct, login replaces itself with the user's shell, which then
waits for the first command.
c. List and explain the design goals of android operating system.
Design Goals (any five, 1 mark each)
A number of key design goals for the Android platform evolved during its development:
1. Provide a complete open-source platform for mobile devices.
2. Strongly support proprietary third-party applications with a robust and stable API.
3. Allow all third-party applications, including those from Google, to compete on a level playing field.
4. Provide an application security model in which users do not have to deeply trust third-party applications.
5. Support typical mobile user interaction: spending short amounts of time in many apps.
6. Manage application processes for users, simplifying the user experience so that users do not have to worry about closing applications when done with them.
7. Encourage applications to interoperate and collaborate in rich and secure ways.
8. Create a full general-purpose operating system.
d. Write a note on hardware abstraction layer in windows operating system structure.
The Hardware Abstraction Layer:
One goal of Windows is to make the system portable across hardware platforms.
Ideally, to bring up an operating system on a new type of computer system
it should be possible to just recompile the operating system on the new platform.
Unfortunately, it is not this simple. While many of the components in some layers
of the operating system can be largely portable (because they mostly deal with internal
data structures and abstractions that support the programming model), other
layers must deal with device registers, interrupts, DMA, and other hardware features
that differ significantly from machine to machine.
Most of the source code for the NTOS kernel is written in C rather than assembly
language (only 2% is assembly on x86, and less than 1% on x64). However, all
this C code cannot just be scooped up from an x86 system, plopped down on, say,
an ARM system, recompiled, and rebooted owing to the many hardware differences
between processor architectures that have nothing to do with the different instruction
sets and which cannot be hidden by the compiler. Languages like C make
it difficult to abstract away some hardware data structures and parameters, such as
the format of page-table entries and the physical memory page sizes and word
length, without severe performance penalties. All of these, as well as a slew of
hardware-specific optimizations, would have to be manually ported even though
they are not written in assembly code.
Hardware details about how memory is organized on large servers, or what
hardware synchronization primitives are available, can also have a big impact on
higher levels of the system. For example, NT’s virtual memory manager and the
kernel layer are aware of hardware details related to cache and memory locality.
Throughout the system NT uses compare&swap synchronization primitives, and it
would be difficult to port to a system that does not have them. Finally, there are
many dependencies in the system on the ordering of bytes within words. On all the
systems NT has ever been ported to, the hardware was set to little-endian mode.
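The byte-ordering dependency mentioned above is easy to demonstrate: the same 32-bit value is laid out differently in memory on little-endian and big-endian machines, which is why code that serializes words byte by byte is not automatically portable:

```python
import struct, sys

value = 0x01020304
little = struct.pack("<I", value)   # least-significant byte first (little-endian)
big    = struct.pack(">I", value)   # most-significant byte first (big-endian)

print(little.hex())   # 04030201
print(big.hex())      # 01020304
print(sys.byteorder)  # byte order of the machine running this script
```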
Besides these larger issues of portability, there are also minor ones even between
different motherboards from different manufacturers. Differences in CPU
versions affect how synchronization primitives like spin-locks are implemented.
There are several families of support chips that create differences in how hardware
interrupts are prioritized, how I/O device registers are accessed, management of
DMA transfers, control of the timers and real-time clock, multiprocessor synchronization,
working with firmware facilities such as ACPI (Advanced Configuration
and Power Interface), and so on. Microsoft made a serious attempt to hide these
types of machine dependencies in a thin layer at the bottom called the HAL, as
mentioned earlier. The job of the HAL is to present the rest of the operating system
with abstract hardware that hides the specific details of processor version, support
chipset, and other configuration variations. These HAL abstractions are presented
in the form of machine-independent services (procedure calls and macros)
that NTOS and the drivers can use.
e. Explain using suitable diagram NTFS master file table and its attribute.
Windows supports several file systems, the most important of which are FAT-16, FAT-32, and
NTFS (NT File System).
Each NTFS volume (e.g., disk partition) contains files, directories, bitmaps, and other data
structures.
The principal data structure in each volume is the MFT (Master File Table), which is a linear
sequence of fixed-size 1-KB records. Each MFT record describes one file or one directory. It
contains the file’s attributes, such as its name and timestamps, and the list of disk addresses where its
blocks are located. If a file is extremely large, it is sometimes necessary to use two or more MFT
records to contain the list of all the blocks, in which case the first MFT record, called the base
record, points to the additional MFT records.
The MFT is itself a file and as such can be placed anywhere within the volume,
thus eliminating the problem with defective sectors in the first track. Furthermore,
the file can grow as needed, up to a maximum size of 2^48 records.
The MFT is shown in Fig. 11-39. Each MFT record consists of a sequence of
(attribute header, value) pairs. Each attribute begins with a header telling which
attribute this is and how long the value is. Some attribute values are variable
length, such as the file name and the data. If the attribute value is short enough to
fit in the MFT record, it is placed there. If it is too long, it is placed elsewhere on
the disk and a pointer to it is placed in the MFT record. This makes NTFS very efficient
for small files, that is, those that can fit within the MFT record itself.
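The resident/non-resident rule can be sketched as follows. This is a highly simplified illustration, not the on-disk NTFS layout; the record budget and the placeholder disk address are made up:

```python
RECORD_BUDGET = 1024   # bytes available in one MFT record (illustrative)

def store_attribute(name, value, budget):
    """Store a value inline ("resident") if it fits, else keep only a pointer."""
    header = {"attr": name, "length": len(value)}   # every attribute starts with a header
    if len(value) <= budget:
        return {**header, "resident": True, "value": value}
    return {**header, "resident": False,            # too long: value lives elsewhere
            "extent": ("disk-cluster", 0)}          # placeholder disk address

small = store_attribute("file name", b"notes.txt", RECORD_BUDGET)
big   = store_attribute("data", b"x" * 100_000, RECORD_BUDGET)
print(small["resident"], big["resident"])  # True False
```

The payoff is the one the text describes: a small file's data travels with its metadata in a single 1-KB record, so reading it costs no extra disk accesses.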
The first 16 MFT records are reserved for NTFS metadata files, as illustrated
in Fig. 11-39. Each record describes a normal file that has attributes and data
blocks, just like any other file. Each of these files has a name that begins with a
dollar sign to indicate that it is a metadata file. The first record describes the MFT
file itself. In particular, it tells where the blocks of the MFT file are located so that
the system can find the MFT file. Clearly, Windows needs a way to find the first
block of the MFT file in order to find the rest of the file-system information. The
way it finds the first block of the MFT file is to look in the boot block, where its
address is installed when the volume is formatted with the file system.
f. Briefly explain windows power management.
The power manager rides herd on power usage throughout the system. Historically
management of power consumption consisted of shutting off the monitor
display and stopping the disk drives from spinning.
Newer power-management facilities include reducing the power consumption
of components when the system is not in use by switching individual devices to
standby states, or even powering them off completely using soft power switches.
Windows supports a special shut down mode called hibernation, which copies
all of physical memory to disk and then reduces power consumption to a small
trickle (notebooks can run weeks in a hibernated state) with little battery drain.
An alternative to hibernation is standby mode where the power manager reduces
the entire system to the lowest power state possible, using just enough power
to refresh the dynamic RAM. Because memory does not need to be copied to
disk, this is somewhat faster than hibernation on some systems.
Despite the availability of hibernation and standby, many users are still in the
habit of shutting down their PC when they finish working. Windows uses hibernation
to perform a pseudo shutdown and startup, called HiberBoot, which is much faster
than a normal shutdown and startup. When the user tells the system to shut down,
HiberBoot logs the user off and then hibernates the system at the point where they would
normally log in again. Later, when the user turns the system on again, HiberBoot
resumes the system at the login point.
A further low-power state is CS (connected standby). CS is possible on systems with special networking
hardware which is able to listen for traffic on a small set of connections using
much less power than if the CPU were running.
Many applications today are implemented with both local code and services in
the cloud. Windows provides WNS (Windows Notification Service) which allows
third-party services to push notifications to a Windows device in CS without requiring
the CS network hardware to specifically listen for packets from the third
party’s servers. WNS notifications can signal time-critical events, such as the arrival
of a text message or a VoIP call. When a WNS packet arrives, the processor
will have to be turned on to process it, but the ability of the CS network hardware
to discriminate between traffic from different connections means the processor
does not have to awaken for every random packet that arrives at the network interface.
_____________________________