Operating System

The computer's master control program. When the computer is turned on, a small
"boot program" loads the operating system. Although additional modules may be
loaded as needed, the main part, known as the "kernel," resides in memory at all times.

The operating system (OS) sets the standards for all application programs that run in
the computer. Applications "talk to" the operating system for all user interface and file
management operations. Also called an "executive" or "supervisor," an operating
system performs the following functions.

User Interface

Today the user interface is almost entirely graphical; it includes the windows, menus
and methods of interaction between you and the computer. Prior to graphical user
interfaces (GUIs), all operation of the computer was performed by typing in
commands. Far from extinct, command-line interfaces are alive and well and provide
an alternative way of running programs on all major operating systems.

Operating systems may support optional interfaces, both graphical and command line.
Although the overwhelming majority of people work with the default interfaces,
different "shells" offer variations of appearance and functionality.

Job Management

Job management controls the order and time in which programs are run and is more
sophisticated in the mainframe environment, where scheduling the daily work has
always been routine. IBM's job control language (JCL) was developed decades ago.
In a desktop environment, batch files can be written to perform a sequence of
operations that can be scheduled to start at a given time.

Task Management

Multitasking, which is the ability to simultaneously execute multiple programs, is
available in all operating systems today. It is critical in the mainframe and server
environment, where applications can be prioritized to run faster or slower depending
on their purpose. In the desktop world, multitasking is necessary for keeping several
applications open at the same time so you can bounce back and forth among them.
See multitasking.
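On POSIX systems the priority just mentioned can be observed from user space
through the getpriority and setpriority calls. The sketch below, which assumes a
Unix-like system, merely lowers the calling process's own "nice" value; it illustrates
the mechanism rather than any particular scheduler's policy.

/* A minimal sketch (POSIX-specific) of a process asking the OS to lower
 * its own scheduling priority. Larger "nice" values mean lower priority. */
#include <stdio.h>
#include <errno.h>
#include <sys/resource.h>   /* getpriority, setpriority */

int main(void)
{
    errno = 0;
    int before = getpriority(PRIO_PROCESS, 0);   /* 0 = the calling process */
    if (before == -1 && errno != 0) {
        perror("getpriority");
        return 1;
    }

    /* Request a lower priority (nice value 10). Unprivileged processes may
     * lower, but generally not raise, their own priority. */
    if (setpriority(PRIO_PROCESS, 0, 10) == -1) {
        perror("setpriority");
        return 1;
    }

    printf("nice value changed from %d to %d\n",
           before, getpriority(PRIO_PROCESS, 0));
    return 0;
}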

Data Management

Data management keeps track of the data on disk, tape and optical storage devices.
The application program deals with data by file name and a particular location within
the file. The operating system's file system knows where the data are physically
stored (which sectors on disk), and the application interacts with the operating
system through a programming interface. Whenever an application needs to read
or write data, it makes a call to the operating system (see API).
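As a concrete illustration, the POSIX sketch below reads the first bytes of a file
purely by name and offset; the path /etc/hostname is only an illustrative choice, and
it is the kernel's file system that turns the request into reads of particular disk sectors.

/* Minimal sketch: read the start of a file through the OS's programming
 * interface. The application supplies only a file name and a position;
 * the file system maps these to physical disk blocks. */
#include <stdio.h>
#include <fcntl.h>    /* open */
#include <unistd.h>   /* lseek, read, close */

int main(void)
{
    int fd = open("/etc/hostname", O_RDONLY);   /* illustrative path */
    if (fd == -1) {
        perror("open");
        return 1;
    }

    char buf[64];
    lseek(fd, 0, SEEK_SET);                 /* "a particular location within the file" */
    ssize_t n = read(fd, buf, sizeof buf);  /* each call traps into the kernel */
    if (n > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    close(fd);
    return 0;
}
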
Device Management

Device management controls peripheral devices by sending them commands in their
own proprietary language. The software routine that knows how to deal with each
device is called a "driver," and the OS requires drivers for the peripherals attached to
the computer. When a new peripheral is added, that device's driver is installed into the
operating system. See driver.

Security

Operating systems provide password protection to keep unauthorized users out of the
system. Some operating systems also maintain activity logs and accounting of the
user's time for billing purposes. They also provide backup and recovery routines for
starting over in the event of a system failure.

Operating System and Application Software

[Diagram: the components of the operating system and the typical application
programs that run in a desktop computer.]

Drivers and Peripherals

[Diagram: the interaction between the operating system, the drivers and the
peripheral devices.]

Process management

Every program running on a computer, be it a service or an application, is a process.


As long as a von Neumann architecture is used to build computers, only one process
per CPU can be run at a time. Older microcomputer OSes such as MS-DOS did not
attempt to bypass this limit, with the exception of interrupt processing, and only one
process could be run under them (although DOS itself featured terminate-and-stay-
resident (TSR) programs as a very partial and awkward solution). Mainframe
operating systems have had multitasking
capabilities since the early 1960s. Modern operating systems enable concurrent
execution of many processes at once via multitasking even with one CPU. Process
management is an operating system's way of dealing with running multiple processes.
Since most computers contain one processor with one core, multitasking is done by
simply switching processes quickly. Depending on the operating system, as more
processes run, either each time slice will become smaller or there will be a longer
delay before each process is given a chance to run. Process management involves
computing and distributing CPU time as well as other resources. Most operating
systems allow a process to be assigned a priority which affects its allocation of CPU
time. Interactive operating systems also employ some level of feedback in which the
task with which the user is working receives higher priority. Interrupt driven
processes will normally run at a very high priority. In many systems there is a
background process, such as the System Idle Process in Windows, which will run
when no other process is waiting for the CPU.
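A minimal POSIX sketch of this switching is shown below: the parent creates two
child processes, and with a single CPU the kernel simply time-slices among the three,
so the order in which their output interleaves is up to the scheduler.

/* Minimal sketch (POSIX): create two child processes that run concurrently
 * with the parent; the kernel time-slices among all three. */
#include <stdio.h>
#include <unistd.h>     /* fork, getpid */
#include <sys/wait.h>   /* wait */

static void work(const char *name)
{
    for (int i = 0; i < 3; i++)
        printf("%s (pid %d) step %d\n", name, getpid(), i);
}

int main(void)
{
    for (int c = 0; c < 2; c++) {
        pid_t pid = fork();         /* ask the OS for a new process */
        if (pid == 0) {             /* child process */
            work(c == 0 ? "child A" : "child B");
            _exit(0);
        }
    }
    work("parent");
    while (wait(NULL) > 0)          /* reap both children */
        ;
    return 0;
}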

Memory management

Current computer architectures arrange the computer's memory in a hierarchical
manner, starting from the fastest registers, CPU cache, random access memory and
disk storage. An operating system's memory manager coordinates the use of these
various types of memory by tracking which one is available, which is to be allocated
or deallocated and how to move data between them. This activity, usually referred to
as virtual memory management, increases the amount of memory available for each
process by making the disk storage seem like main memory. There is a speed penalty
associated with using disks or other slower storage as memory – if running processes
require significantly more RAM than is available, the system may start thrashing. This
can happen either because one process requires a large amount of RAM or because
two or more processes compete for a larger amount of memory than is available. This
then leads to constant transfer of each process's data to slower storage.

Another important part of memory management is managing virtual addresses. If
multiple processes are in memory at once, they must be prevented from interfering
with each other's memory (unless there is an explicit request to utilise shared
memory). This is achieved by having separate address spaces. Each process sees the
whole virtual address space, typically from address 0 up to the maximum size of
virtual memory, as uniquely assigned to it. The operating system maintains a page
table that maps virtual addresses to physical addresses. These memory allocations are
tracked so that when a process terminates, all memory used by that process can be
made available for other processes.
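The effect of separate address spaces can be seen with a small POSIX sketch: after
fork(), parent and child print the same virtual address for a global variable, yet a
write in the child is invisible to the parent, because the page table maps that address
to different physical pages in each process.

/* Minimal sketch (POSIX): the same virtual address holds different values
 * in two processes, because each process has its own address space. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int value = 42;   /* ends up at the same virtual address in both processes */

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {                       /* child */
        value = 99;                       /* copy-on-write gives the child its own page */
        printf("child:  &value=%p value=%d\n", (void *)&value, value);
        _exit(0);
    }
    wait(NULL);                           /* let the child finish first */
    printf("parent: &value=%p value=%d\n", (void *)&value, value);
    return 0;
}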

The operating system can also write inactive memory pages to secondary storage.
This process is called "paging" or "swapping" – the terminology varies between
operating systems.

It is also typical for operating systems to employ otherwise unused physical memory
as a page cache; requests for data from a slower device can be retained in memory to
improve performance. The operating system can also pre-load the in-memory cache
with data that may be requested by the user in the near future; SuperFetch is an
example of this.

Disk and file systems

All operating systems include support for a variety of file systems.

Modern file systems comprise a hierarchy of directories. While the idea is
conceptually similar across all general-purpose file systems, some differences in
implementation exist. Two notable examples of this are the character used to
separate directories, and case sensitivity.

Unix demarcates its path components with a slash (/), a convention followed by
operating systems that emulated it or at least its concept of hierarchical directories,
such as Linux, Amiga OS and Mac OS X. MS-DOS also emulated this feature, but
had already also adopted the CP/M convention of using slashes for additional options
to commands, so instead used the backslash (\) as its component separator. Microsoft
Windows continues with this convention; Japanese editions of Windows use ¥, and
Korean editions use ₩. Versions of Mac OS prior to OS X use a colon (:)
for a path separator. RISC OS uses a period (.).
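For a program that must build paths portably, the difference usually reduces to a
single constant chosen at compile time, as in this small sketch (the directory and file
names are just placeholders):

/* Minimal sketch: building a path with the platform's component separator,
 * backslash on Windows and slash on Unix-like systems. */
#include <stdio.h>

#ifdef _WIN32
#define PATH_SEP "\\"
#else
#define PATH_SEP "/"
#endif

int main(void)
{
    char path[256];
    snprintf(path, sizeof path, "%s%s%s", "docs", PATH_SEP, "report.txt");
    printf("%s\n", path);   /* docs/report.txt or docs\report.txt */
    return 0;
}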

Unix and Unix-like operating systems allow for any character in file names other than
the slash (including line feed (LF) and other control characters). Unix file names are
case sensitive, which allows multiple files to be created with names that differ only in
case. By contrast, Microsoft Windows file names are not case sensitive by default.
Windows also has a larger set of punctuation characters that are not allowed in file
names.

File systems may provide journaling, which provides safe recovery in the event of a
system crash. A journaled file system writes information twice: first to the journal,
which is a log of file system operations, then to its proper place in the ordinary file
system. In the event of a crash, the system can recover to a consistent state by
replaying a portion of the journal. In contrast, non-journaled file systems typically
need to be examined in their entirety by a utility such as fsck or chkdsk. Soft updates
is an alternative to journaling that avoids the redundant writes by carefully ordering
the update operations. Log-structured file systems and ZFS also differ from traditional
journaled file systems in that they avoid inconsistencies by always writing new copies
of the data, eschewing in-place updates.
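The write-ahead idea behind journaling can be sketched in user space: record the
intent in a journal and force it to stable storage before touching the data itself. The
file names and record format below are hypothetical; real journaled file systems do
this inside the kernel, at the block or metadata level.

/* Illustrative sketch of write-ahead journaling: the intended change is
 * appended to a journal and flushed to disk before the data file itself
 * is updated, so a crash between the steps can be recovered by replay. */
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

static int append_and_sync(const char *path, const char *text)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd == -1)
        return -1;
    if (write(fd, text, strlen(text)) < 0) {
        close(fd);
        return -1;
    }
    fsync(fd);          /* force the record to stable storage */
    close(fd);
    return 0;
}

int main(void)
{
    /* 1. Record the intent in the journal and make sure it is on disk. */
    append_and_sync("journal.log", "BEGIN write data.txt: new contents\n");

    /* 2. Apply the change to its proper place in the file system. */
    append_and_sync("data.txt", "new contents\n");

    /* 3. Mark the journal entry complete; after a crash, uncommitted
     *    BEGIN records are replayed to restore consistency. */
    append_and_sync("journal.log", "COMMIT\n");
    return 0;
}
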
Many Linux distributions support some or all of ext2, ext3, ReiserFS, Reiser4, GFS,
GFS2, OCFS, OCFS2, and NILFS. Linux also has full support for XFS and JFS,
along with the FAT file systems, and NTFS.

Microsoft Windows includes support for FAT12, FAT16, FAT32, and NTFS. The
NTFS file system is the most efficient and reliable of the four Windows file systems,
and as of Windows Vista, is the only file system which the operating system can be
installed on. Windows Embedded CE 6.0 introduced ExFAT, a file system suitable for
flash drives.

Mac OS X supports HFS+ as its primary file system, and it supports several other file
systems as well, including FAT16, FAT32, NTFS and ZFS.

Common to all these (and other) operating systems is support for file systems
typically found on removable media. FAT12 is the file system most commonly found
on floppy discs. ISO 9660 and Universal Disk Format are two common formats that
target Compact Discs and DVDs, respectively. Mount Rainier is a newer extension to
UDF supported by Linux 2.6 kernels and Windows Vista that facilitates rewriting to
DVDs in the same fashion as has long been possible with floppy disks.

Networking

Most current operating systems are capable of using the TCP/IP networking protocols.
This means that computers running dissimilar operating systems can participate in a
common network for sharing resources such as computing, files, printers, and
scanners using either wired or wireless connections.
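From an application's point of view, using the operating system's TCP/IP stack is a
matter of a few socket calls. The POSIX sketch below opens a TCP connection; the
host name example.com and port 80 are placeholders.

/* Minimal sketch (POSIX sockets): open a TCP connection through the
 * operating system's TCP/IP stack. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netdb.h>      /* getaddrinfo, freeaddrinfo */

int main(void)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;      /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;  /* TCP */

    if (getaddrinfo("example.com", "80", &hints, &res) != 0) {
        fprintf(stderr, "name lookup failed\n");
        return 1;
    }

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd == -1 || connect(fd, res->ai_addr, res->ai_addrlen) == -1) {
        perror("socket/connect");
        freeaddrinfo(res);
        return 1;
    }
    printf("connected\n");

    close(fd);
    freeaddrinfo(res);
    return 0;
}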

Many operating systems also support one or more vendor-specific legacy networking
protocols, for example SNA on IBM systems, DECnet on systems from
Digital Equipment Corporation, and Microsoft-specific protocols on Windows.
Specific protocols for specific tasks may also be supported such as NFS for file
access.

Security

Many operating systems include some level of security. Security is based on the two
ideas that:

• The operating system provides access to a number of resources, directly or
indirectly, such as files on a local disk, privileged system calls, personal
information about users, and the services offered by the programs running on
the system;
• The operating system is capable of distinguishing between some requesters of
these resources who are authorized (allowed) to access the resource, and
others who are not authorized (forbidden). While some systems may simply
distinguish between "privileged" and "non-privileged", systems commonly
have a form of requester identity, such as a user name. Requesters, in turn,
divide into two categories:
o Internal security: an already running program. On some systems, a
program, once it is running, has no limitations, but commonly the
program has an identity which it keeps and which is used to check all of its
requests for resources.
o External security: a new request from outside the computer, such as a
login at a connected console or some kind of network connection. To
establish identity there may be a process of authentication. Often a
username must be quoted, and each username may have a password.
Other methods of authentication, such as magnetic cards or biometric
data, might be used instead. In some cases, especially connections
from the network, resources may be accessed with no authentication at
all.

In addition to the allow/disallow model of security, a system with a high level of
security will also offer auditing options. These would allow tracking of requests for
access to resources (such as, "who has been reading this file?").
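On a POSIX system, an application can ask the operating system directly whether the
current requester identity is authorized for a resource, as in this small sketch (the path
shown is typically readable only by root):

/* Minimal sketch (POSIX): ask the OS whether the current identity is
 * authorized to read or write a given resource. */
#include <stdio.h>
#include <unistd.h>   /* access */

int main(void)
{
    const char *path = "/etc/shadow";   /* usually restricted to root */

    printf("read  %s: %s\n", path,
           access(path, R_OK) == 0 ? "allowed" : "denied");
    printf("write %s: %s\n", path,
           access(path, W_OK) == 0 ? "allowed" : "denied");
    return 0;
}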

Security of operating systems has long been a concern because of highly sensitive
data held on computers, both of a commercial and military nature. The United States
Government Department of Defense (DoD) created the Trusted Computer System
Evaluation Criteria (TCSEC) which is a standard that sets basic requirements for
assessing the effectiveness of security. This became of vital importance to operating
system makers, because the TCSEC was used to evaluate, classify and select
computer systems being considered for the processing, storage and retrieval of
sensitive or classified information.

Internal security

Internal security can be thought of as protecting the computer's resources from the
programs concurrently running on the system. Most operating systems set programs
running natively on the computer's processor, so the problem arises of how to stop
these programs from doing the same tasks and having the same privileges as the operating
system (which is after all just a program too). Processors used for general purpose
operating systems generally have a hardware concept of privilege. Generally less
privileged programs are automatically blocked from using certain hardware
instructions, such as those to read or write from external devices like disks. Instead,
they have to ask the privileged program (operating system kernel) to read or write.
The operating system therefore gets the chance to check the program's identity and
allow or refuse the request.

An alternative strategy, and the only sandbox strategy available in systems that do not
meet the Popek and Goldberg virtualization requirements, is for the operating system
not to run user programs as native code, but instead to emulate a processor or provide
a host for a p-code-based system such as Java.

Internal security is especially relevant for multi-user systems; it allows each user of
the system to have private files that the other users cannot tamper with or read.
Internal security is also vital if auditing is to be of any use, since a program can
potentially bypass the operating system, including its auditing.

External security

Typically an operating system offers (or hosts) various services to other network
computers and users. These services are usually provided through ports or numbered
access points beyond the operating system's network address. Services include
offerings such as file sharing, print services, email, web sites, and file transfer
protocols (FTP), most of which can have their security compromised.

At the front line of security are hardware devices known as firewalls or intrusion
detection/prevention systems. At the operating system level, there are a number of
software firewalls available, as well as intrusion detection/prevention systems. Most
modern operating systems include a software firewall, which is enabled by default. A
software firewall can be configured to allow or deny network traffic to or from a
service or application running on the operating system. Therefore, one can install and
be running an insecure service, such as Telnet or FTP, and not have to be threatened
by a security breach because the firewall would deny all traffic trying to connect to
the service on that port.

Graphical user interfaces

Today, most modern operating systems contain graphical user interfaces (GUIs). A
few older operating systems tightly integrated the GUI into the kernel; for example, in
the original implementations of Microsoft Windows and Mac OS the graphical
subsystem was actually part of the operating system. More modern operating systems
are modular, separating the graphics subsystem from the kernel (as is now done in
Linux and Mac OS X) so that the graphics subsystem is not part of the OS at all.

Many operating systems allow the user to install or create any user interface they
desire. The X Window System in conjunction with GNOME or KDE is a commonly
found setup on most Unix and Unix derivative (BSD, Linux, Minix) systems.

Graphical user interfaces evolve over time. For example, Windows has modified its
user interface almost every time a new major version of Windows is released, and the
Mac OS GUI changed dramatically with the introduction of Mac OS X in 2001.

Device drivers

A device driver is a specific type of computer software developed to allow interaction
with hardware devices. Typically this constitutes an interface for communicating with
the device, through the specific computer bus or communications subsystem that the
hardware is connected to, providing commands to and/or receiving data from the
device, and, on the other end, presenting the requisite interfaces to the operating
system and software applications. It is a specialized, hardware-dependent and
operating-system-specific program that enables another program, typically the
operating system or an application running under the operating system kernel, to
interact transparently with a hardware device, and it usually provides the interrupt
handling required for asynchronous, time-dependent hardware interfacing.

The key design goal of device drivers is abstraction. Every model of hardware (even
within the same class of device) is different. Manufacturers also release newer models
that provide more reliable or better performance, and these newer models are often
controlled differently. Computers and their operating systems cannot be expected to
know how to control every device, both now and in the future. To solve this problem,
OSes essentially dictate how every type of device should be controlled. The function
of the device driver is then to translate these OS-mandated function calls into device-
specific calls. In theory a new device, which is controlled in a new manner, should
function correctly if a suitable driver is available. This new driver will ensure that the
device appears to operate as usual from the operating system's point of view.
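The translation layer described above is commonly implemented as a table of
function pointers that each driver fills in. The sketch below is purely illustrative; the
structure and function names are hypothetical and do not correspond to any real
kernel's driver API.

/* Illustrative sketch: the "OS" dictates a fixed set of operations, and a
 * driver supplies a table of function pointers that translates them into
 * device-specific actions. */
#include <stdio.h>
#include <stddef.h>

struct device_ops {                    /* interface mandated by the "OS" */
    int  (*open_dev)(void);
    long (*read_dev)(char *buf, size_t len);
    void (*close_dev)(void);
};

/* A toy "driver" for one particular device model. */
static int toy_open(void)  { puts("toy device: powered up"); return 0; }
static long toy_read(char *buf, size_t len)
{
    const char data[] = "hello from hardware";
    size_t n = sizeof data - 1 < len ? sizeof data - 1 : len;
    for (size_t i = 0; i < n; i++)
        buf[i] = data[i];
    return (long)n;
}
static void toy_close(void) { puts("toy device: powered down"); }

static const struct device_ops toy_driver = { toy_open, toy_read, toy_close };

int main(void)
{
    /* The OS only ever talks to the generic table, so any device with a
     * conforming driver "appears to operate as usual". */
    char buf[64];
    toy_driver.open_dev();
    long n = toy_driver.read_dev(buf, sizeof buf);
    printf("read %ld bytes: %.*s\n", n, (int)n, buf);
    toy_driver.close_dev();
    return 0;
}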

Mainframe computers

The earliest operating systems were developed for mainframe computer architectures
in the 1960s. The enormous investment in software for these systems caused most of
the original computer manufacturers to continue to develop hardware and operating
systems that are compatible with those early operating systems. Those early systems
pioneered many of the features of modern operating systems. Mainframe operating
systems that are still supported include:

• Burroughs MCP -- B5000, 1961 to Unisys ClearPath/MCP, present.
• IBM OS/360 -- IBM System/360, 1964 to IBM zSeries, present.
• UNIVAC EXEC 8 -- UNIVAC 1108, 1964 to Unisys ClearPath IX, present.

Modern mainframes typically also run Linux or Unix variants. A "Datacenter" variant
of Windows Server 2003 is also available for some mainframe systems.

Embedded systems

Embedded systems use a variety of dedicated operating systems. In some cases, the
"operating system" software is directly linked to the application to produce a
monolithic special-purpose program. In the simplest embedded systems, there is no
distinction between the OS and the application. Embedded systems with strict timing
requirements run what are known as real-time operating systems.

Unix-like operating systems

[Image: a customized KDE desktop running under Linux.]

The Unix-like family is a diverse group of operating systems, with several major sub-
categories including System V, BSD, and Linux.
The name "UNIX" is a trademark of The Open Group which licenses it for use
with any operating system that has been shown to conform to their definitions.
"Unix-like" is commonly used to refer to the large set of operating systems which
resemble the original Unix.

Unix systems run on a wide variety of machine architectures. They are used heavily
as server systems in business, as well as workstations in academic and engineering
environments. Free software Unix variants, such as Linux and BSD, are popular in
these areas. The market share for Linux is divided between many different
distributions. Enterprise class distributions by Red Hat or SuSe are used by
corporations, but some home users may use those products. Historically home users
typically installed a distribution themselves, but in 2007 Dell began to offer the
Ubuntu Linux distribution on home PCs. Linux on the desktop is also popular in the
developer and hobbyist operating system development communities. (see below)

Market share statistics for freely available operating systems are usually inaccurate
since most free operating systems are not purchased, making usage under-represented.
On the other hand, market share statistics based on total downloads of free operating
systems are often inflated, as there is no economic disincentive to acquiring multiple
copies; users can download several distributions, test them, and decide which they
like best.

Some Unix variants like HP's HP-UX and IBM's AIX are designed to run only on that
vendor's hardware. Others, such as Solaris, can run on multiple types of hardware,
including x86 servers and PCs. Apple's Mac OS X, a hybrid kernel-based BSD variant
derived from NeXTSTEP, Mach, and FreeBSD, has replaced Apple's earlier (non-
Unix) Mac OS.

Open source
See POSIX -- full Unix interoperability heavily depends on full POSIX
standards compliance. The POSIX standard applies to any operating system.

Over the past several years, the trend in the Unix and Unix-like space has been to
open source operating systems. Many areas previously dominated by UNIX have seen
significant inroads by Linux; Solaris source code is now the basis of the OpenSolaris
project.

The team at Bell Labs that designed and developed Unix went on to develop Plan 9
and Inferno, which were designed for modern distributed environments. They had
graphics built-in, unlike Unix counterparts that added it to the design later. Plan 9 did
not become popular because, unlike many Unix distributions, it was not originally
free. It has since been released under the Lucent Public License, a free software and
open source license, and has an expanding community of developers. Inferno was sold
to Vita Nuova and has been released under a GPL/MIT license.

Mac OS X

Mac OS X is a line of proprietary, graphical operating systems developed, marketed,
and sold by Apple Inc., the latest of which is pre-loaded on all currently shipping
Macintosh computers. Mac OS X is the successor to the original Mac OS, which had
been Apple's primary operating system since 1984. Unlike its predecessor, Mac OS X
is a UNIX operating system built on technology that had been developed at NeXT
through the second half of the 1980s and up until Apple purchased the company in
early 1997.

The operating system was first released in 1999 as Mac OS X Server 1.0, with a
desktop-oriented version (Mac OS X v10.0) following in March 2001. Since then,
four more distinct "end-user" and "server" editions of Mac OS X have been released,
the most recent being Mac OS X v10.4, which was first made available in April 2005.
Releases of Mac OS X are named after big cats; Mac OS X v10.4 is usually referred
to by Apple and users as "Tiger". In October 2007, Apple will release Mac OS X 10.5,
nicknamed "Leopard".

The server edition, Mac OS X Server, is architecturally identical to its desktop
counterpart but usually runs on Apple's line of Macintosh server hardware. Mac OS X
Server includes workgroup management and administration software tools that
provide simplified access to key network services, including a mail transfer agent, a
Samba server, an LDAP server, a domain name server, and others.

Microsoft Windows

The Microsoft Windows family of operating systems originated as a graphical layer
on top of the older MS-DOS environment for the IBM PC. Modern versions are based
on the newer Windows NT core that first took shape in OS/2 and borrowed from
VMS. Windows runs on 32-bit and 64-bit Intel and AMD processors, although earlier
versions also ran on the DEC Alpha, MIPS, Fairchild (later Intergraph) Clipper and
PowerPC architectures (some work was done to port it to the SPARC architecture).

As of July 2007, Microsoft Windows held a dominant share of the worldwide desktop
market, although some predict this will dwindle because Microsoft's restrictive
licensing, CD-Key registration, and customer practices are causing an increased
interest in open source operating systems. Windows is also used on low-end
and mid-range servers, supporting applications such as web servers and database
servers. In recent years, Microsoft has spent significant marketing and research &
development money to demonstrate that Windows is capable of running any
enterprise application, which has resulted in consistent price/performance records (see
the TPC) and significant acceptance in the enterprise market.

The most widely used version of the Microsoft Windows family is Microsoft
Windows XP, released on October 25, 2001. The latest release of Windows XP is
Windows XP Service Pack 2, released on August 6, 2004.

In November 2006, after more than five years of development work, Microsoft
released Windows Vista, a major new version of Microsoft Windows which contains a
large number of new features and architectural changes. Chief amongst these are a
new user interface and visual style called Windows Aero, a number of new security
features such as User Account Control, and new multimedia applications such as
Windows DVD Maker.

Hobby operating system development

Operating system development, or OSDev for short, has a large cult following as a
hobby, and some operating systems, such as Linux, grew out of hobby operating
system projects. The design and implementation of an operating system requires skill
and determination, and the term can cover anything from a basic "Hello World" boot
loader to a fully featured kernel. One classic example of this is the Minix operating
system -- an OS that was designed as a teaching tool but was heavily used by
hobbyists before Linux eclipsed it in popularity.

Other

Mainframe operating systems, such as IBM's z/OS, and embedded operating systems
such as VxWorks, eCos, and Palm OS, are usually unrelated to Unix and Windows.
The exceptions are Windows CE, Windows NT Embedded 4.0 and Windows XP
Embedded, which are descendants of Windows, and the several *BSD and Linux
distributions tailored for embedded systems. OpenVMS from Hewlett-Packard
(formerly DEC) is still under active development.

Older operating systems which are still used in niche markets include OS/2 from
IBM; Mac OS, the non-Unix precursor to Apple's Mac OS X; BeOS; XTS-300.

Popular prior to the dot-com era, operating systems such as AmigaOS and RISC OS
continue to be developed as minority platforms for enthusiast communities and
specialist applications.

Research and development of new operating systems continues. GNU Hurd is
designed to be backwards compatible with Unix, but with enhanced functionality and
a microkernel architecture. Singularity is a project at Microsoft Research to develop
an operating system with better memory protection based on the .Net managed code
model.

The history of computer operating systems recapitulates, to a degree, the recent
history of computing.

Operating systems (OS) provide a set of functions needed and used by most
application-programs on a computer, and the necessary linkages for the control and
synchronization of the computer's hardware. On the first computers, without an
operating system, every program needed the full hardware specification to run
correctly and perform standard tasks, and its own drivers for peripheral devices like
printers and card-readers. The growing complexity of hardware and application-
programs eventually made operating systems a necessity.

Background

Early computers lacked any form of operating system. The user had sole use of the
machine and would arrive armed with program and data, often on punched paper and
tape. The program would be loaded into the machine, and the machine would be set to
work until the program completed or crashed. Programs could generally be debugged
via a front panel using switches and lights. It is said that Alan Turing was a master of
this on the early Manchester Mark I machine, and he was already deriving the
primitive conception of an operating system from the principles of the Universal
Turing machine.

Later machines came with libraries of support code, which would be linked to the
user's program to assist in operations such as input and output. This was the genesis of
the modern-day operating system. However, machines still ran a single job at a time;
at Cambridge University in England the job queue was at one time a washing line
from which tapes were hung with different colored clothes-pegs to indicate job-
priority.

As machines became more powerful, the time needed for a run of a program
diminished and the time to hand off the equipment became very large by comparison.
Accounting for and paying for machine usage moved on from checking the wall clock
to automatic logging by the computer. Run queues evolved from a literal queue of
people at the door, to a heap of media on a jobs-waiting table, or batches of punch-
cards stacked one on top of the other in the reader, until the machine itself was able to
select and sequence which magnetic tape drives were online. Where program
developers had originally had access to run their own jobs on the machine, they were
supplanted by dedicated machine operators who looked after the well-being and
maintenance of the machine and were less and less concerned with implementing
tasks manually. When commercially available computer centers were faced with the
implications of data lost through tampering or operational errors, equipment vendors
were put under pressure to enhance the runtime libraries to prevent misuse of system
resources. Automated monitoring was needed not just for CPU usage but for counting
pages printed, cards punched, cards read, disk storage used and for signalling when
operator intervention was required by jobs such as changing magnetic tapes.

All these features were building up towards the repertoire of a fully capable operating
system. Eventually the runtime libraries became an amalgamated program that was
started before the first customer job and could read in the customer job, control its
execution, clean up after it, record its usage, and immediately go on to process the
next job. Significantly, it became possible for programmers to use symbolic program-
code instead of having to hand-encode binary images, once task-switching allowed a
computer to perform translation of a program into binary form before running it.
These resident background programs, capable of managing multistep processes, were
often called monitors or monitor-programs before the term OS established itself.

An underlying program offering basic hardware-management, software-scheduling
and resource-monitoring may seem a remote ancestor to the user-oriented OSes of the
personal computing era. But there has been a shift in meaning. With the era of
commercial computing, more and more "secondary" software was bundled in the OS
package, leading eventually to the perception of an OS as a complete user-system
with utilities, applications (such as text editors and file managers) and configuration
tools, and having an integrated graphical user interface. The true descendant of the
early operating systems is what we now call the "kernel". In technical and
development circles the old restricted sense of an OS persists because of the
continued active development of embedded operating systems for all kinds of devices
with a data-processing component, from hand-held gadgets up to industrial robots and
real-time control-systems, which do not run user-applications at the front-end. An
embedded OS in a device today is not so far removed as one might think from its
ancestor of the 1950s.

The broader categories of systems and application software are discussed in the
computer software article.

The mainframe era

Early operating systems were very diverse, with each vendor producing one or more
operating systems specific to their particular hardware. Every operating system, even
from the same vendor, could have radically different models of commands, operating
procedures, and such facilities as debugging aids. Typically, each time the
manufacturer brought out a new machine, there would be a new operating system.
This state of affairs continued until the 1960s when IBM developed the System/360
series of machines which all used the same instruction architecture. Because there
were enormous performance differences across the range, a single operating system
could not be used and a family of operating systems was developed. See: OS/360.
(The problems encountered in the development of the OS/360 are legendary, and are
described by Fred Brooks in The Mythical Man-Month—a book that has become a
classic of software engineering).

OS/360 evolved to become successively MFT, MVT, SVS, MVS, MVS/XA,
MVS/ESA, OS/390 and z/OS, which includes the UNIX kernel as well as a huge
amount of new functionality required by modern mission-critical applications running
on the zSeries mainframes. It is worth mentioning that IBM maintained full
compatibility with the past, so that programs developed in the sixties can still run
under z/OS with no change. Although z/OS runs UNIX applications, it is a proprietary
OS.

Control Data Corporation developed the Scope operating system in the 1960s, for
batch processing. In cooperation with the University of Minnesota, the KRONOS and
later the NOS operating systems were developed during the 1970s, which supported
simultaneous batch and timesharing use. Like many commercial timesharing systems,
its interface was an extension of the Dartmouth BASIC operating systems, one of the
pioneering efforts in timesharing and programming languages. In the late 1970s,
Control Data and the University of Illinois developed the PLATO operating system,
which used plasma panel displays and long-distance time sharing networks. Plato was
remarkably innovative for its time, featuring real-time chat, and multi-user graphical
games.

UNIVAC, the first commercial computer manufacturer, produced a series of EXEC
operating systems. Like all early mainframe systems, this was a batch-oriented
system that managed magnetic drums, disks, card readers and line printers. In the
1970s, UNIVAC produced the Real-Time Basic (RTB) system to support large-scale
time sharing, also patterned after the Dartmouth BASIC system.

General Electric and MIT developed General Comprehensive Operating System (or
General Electric Comprehensive Operating System) known as GECOS and later
GCOS when General Electric's computer business was acquired by Honeywell.
GECOS introduced the concept of ringed security privilege levels.

Digital Equipment Corporation developed many operating systems for its various
computer lines, including the simple RT-11 system for its 16-bit PDP-11 class
machines, the VMS system for the 32-bit VAX computer, and TOPS-10 and TOPS-20
time sharing systems for the 36-bit PDP-10 class systems. Prior to the widespread use
of UNIX, TOPS-10 was a particularly popular system in universities, and in the early
ARPANET community.

Minicomputers and the rise of UNIX

The UNIX operating system had its beginnings at AT&T Bell
Laboratories in the late 1960s. Because it was essentially free in early editions, easily
obtainable, and easily modified, it achieved wide acceptance. It also became a
requirement within the Bell System operating companies. Since it was written in a
high-level language, when that language was ported to a new machine architecture
UNIX could be ported as well. This portability permitted it to become the choice
for a second generation of minicomputers and the first generation of workstations. By
widespread use it exemplified the idea of an operating system that was conceptually
the same across various hardware platforms. It still was owned by AT&T and that
limited its use to groups or corporations who could afford to license it.

Many early operating systems were collections of utilities to allow users to run
software on their systems. There were some companies who were able to develop
better systems, such as early Digital Equipment Corporation systems, but others never
supported features that were useful on other hardware types.

In the late 1960s through the late 1970s, several hardware capabilities evolved that
allowed similar or ported software to run on more than one system. Early systems had
utilized microprogramming to implement features on their systems in order to permit
different underlying architectures to appear to be the same as others in a series. In fact,
most 360s after the 360/40 (except the 360/165 and 360/168) were microprogrammed
implementations.

One system which evolved in this time frame was the Pick operating system.
Developed and sold by Microdata Corporation, which created the precursors of the
system, Pick started as a database application support program, graduated to system
work, and still exists across a wide variety of platforms, supported on most UNIX
systems as an add-on database system.

Other packages such as Oracle are middleware and contain many of the features of
operating systems, but are in fact large applications supported on many hardware
platforms.

As ever more hardware was packed into ever smaller packages, first through bit-slice
levels of integration and then with entire systems present on a single chip, small 4-
and 8-bit processors came to be known as microprocessors. Most were not
microprogrammed, but were completely integrated general-purpose processors.

The case of 8-bit home computers and game consoles

Home computers

Although most small 8-bit home computers of the 1980s, such as the Commodore
64, the Amstrad CPC, the ZX Spectrum series and others, could use a "normal" disk-
loading operating system such as CP/M or GEOS, they could generally work without
one. In fact, most if not all of these computers shipped with a built-in BASIC
interpreter on ROM, which also served as a crude operating system, allowing minimal
file management operations (such as deletion, copying, etc.) to be performed and
sometimes disk formatting, along of course with application loading and execution,
which sometimes required a non-trivial command sequence, like with the Commodore
64.

The fact that the majority of these machines were bought for entertainment and
educational purposes and were seldom used for more "serious" or business/science
oriented applications, partly explains why a "true" operating system was not
necessary.

Another reason is that they were usually single-task and single-user machines and
shipped with minimal amounts of RAM, usually between 4 and 256 kilobytes, with 64
and 128 being common figures, and 8-bit processors, so an operating system's
overhead would likely compromise the performance of the machine without really
being necessary.

Even the rare word processor and office suite applications were mostly self-contained
programs which took over the machine completely, as did video games.

Finally, most of these machines didn't even ship with a built-in floppy disk drive,
which made using a disk-based OS impossible or at best a luxury option.

Game consoles and video games

Since virtually all video game consoles and arcade cabinets designed and built after
1980 were true digital machines (unlike the analog Pong clones and derivatives),
some of them carried a minimal form of BIOS or built-in game, such as the
ColecoVision, the Sega Master System and the SNK Neo Geo. There were however
successful designs where a BIOS was not necessary, such as the Nintendo NES and its
clones.

Modern-day game consoles, starting with the PC-Engine, all have a minimal BIOS
that also provides some interactive utilities such as memory card management, audio
or video CD playback and copy prevention, and sometimes carries libraries for
developers to use. Few of these cases, however, would qualify as a "true" operating
system.

The most notable exceptions are probably the Dreamcast game console, which
includes a minimal BIOS, like the PlayStation, but can load the Windows CE
operating system from the game disk, allowing easy porting of games from the PC
world, and the Xbox game console, which is little more than a disguised Intel-based
PC running a secret, modified version of Microsoft Windows in the background.

Furthermore, there are Linux versions that will run on a PlayStation or Xbox and
maybe other game consoles as well, provided they have access to a large mass storage
device and have a reasonable amount of RAM (the bare minimum for a GUI is around
512 kilobytes, as the case of the Commodore Amiga or early ATARI ST shows.
GEOS however ran on a stock C64 which came with as little as 64 kilobytes).

Long before that, Sony had released a kind of development kit called the Net Yaroze
for its first PlayStation platform, which provided a series of programming and
developing tools to be used with a normal PC and a specially modified "Black
PlayStation" that could be interfaced with a PC and download programs from it.
These operations require in general a functional OS on both platforms involved.

In general, it can be said that videogame consoles and coin-operated arcade machines
used at most a built-in BIOS during the 1970s, 1980s and most of the 1990s, while
from the PlayStation era and beyond they started getting more and more sophisticated,
to the point of requiring a generic or custom-built OS for aiding in development and
expandability.

The personal computer era: Apple, PC/MS/DR-DOS and beyond

The development of microprocessors made inexpensive computing available for the
small business and hobbyist, which in turn led to the widespread use of
interchangeable hardware components using a common interconnection (such as the
S-100, SS-50, Apple II, ISA, and PCI buses), and an increasing need for 'standard'
operating systems to control them. The most important of the early OSes on these
machines was Digital Research's CP/M-80 for the 8080 / 8085 / Z-80 CPUs. It was
based on several Digital Equipment Corporation operating systems, mostly for the
PDP-11 architecture. MS-DOS (or PC-DOS when supplied by IBM) was based
originally on CP/M-80. Each of these machines had a small boot program in ROM
which loaded the OS itself from disk. The BIOS on the IBM-PC class machines was
an extension of this idea and has accreted more features and functions in the 20 years
since the first IBM-PC was introduced in 1981.

The decreasing cost of display equipment and processors made it practical to provide
graphical user interfaces for many operating systems, such as the generic X Window
System that is provided with many UNIX systems, or other graphical systems such as
Microsoft Windows, the RadioShack Color Computer's OS-9, Commodore's
AmigaOS, Level II, Apple's Mac OS, or even IBM's OS/2. The original GUI was
developed at Xerox Palo Alto Research Center in the early '70s (the Alto computer
system) and imitated by many vendors.

Distributed systems

A distributed system consists of a collection of autonomous computers linked by a
computer network and equipped with distributed system software. This software
enables computers to coordinate their activities and to share the resources of the
system hardware, software, and data. Users of a distributed system should perceive a
single, integrated computing facility even though it may be implemented by many
computers in different locations. This is in contrast to a network, where the user is
aware that there are several machines whose locations, storage replications, load
balancing, and functionality are not transparent. Benefits of distributed systems
include bridging geographic distances, improving performance and availability,
maintaining autonomy, reducing cost, and allowing for interaction. See also Local-
area networks; Wide-area networks.

The object-oriented model for a distributed system is based on the model supported
by object-oriented programming languages. Distributed object systems generally
provide remote method invocation (RMI) in an object-oriented programming
language together with operating systems support for object sharing and persistence.
Remote procedure calls, which are used in client-server communication, are replaced
by remote method invocation in distributed object systems. See also Object-oriented
programming.

The state of an object consists of the values of its instance variables. In the object-
oriented paradigm, the state of a program is partitioned into separate parts, each of
which is associated with an object. Since object-based programs are logically
partitioned, the physical distribution of objects into different processes or computers
in a distributed system is a natural extension. The Object Management Group's
Common Object Request Broker (CORBA) is a widely used standard for distributed
object systems. Other object management systems include the Open Software
Foundation's Distributed Computing Environment (DCE) and Microsoft's Distributed
Component Object Model (DCOM).

CORBA specifies a system that provides interoperability among objects in a
heterogeneous, distributed environment in a way that is transparent to the
programmer. Its design is based on the Object Management Group's object model.

This model defines common object semantics for specifying the externally visible
characteristics of objects in a standard and implementation-independent way. In this
model, clients request services from objects (which will also be called servers)
through a well-defined interface. This interface is specified in Object Management
Group Interface Definition Language (IDL). The request is an event, and it carries
information including an operation, the object reference of the service provider, and
actual parameters (if any). The object reference is a name that defines an object
reliably.

The central component of CORBA is the object request broker (ORB). It encompasses
the entire communication infrastructure necessary to identify and locate objects,
handle connection management, and deliver data. In general, the object request broker
is not required to be a single component; it is simply defined by its interfaces. The
core is the most crucial part of the object request broker; it is responsible for
communication of requests.

The basic functionality provided by the object request broker consists of passing the
requests from clients to the object implementations on which they are invoked. In
order to make a request, the client can communicate with the ORB core through the
Interface Definition Language stub or through the dynamic invocation interface (DII).
The stub represents the mapping between the language of implementation of the client
and the ORB core. Thus the client can be written in any language as long as the
implementation of the object request broker supports this mapping. The ORB core
then transfers the request to the object implementation which receives the request as
an up-call through either an Interface Definition Language (IDL) skeleton (which
represents the object interface at the server side and works with the client stub) or a
dynamic skeleton (a skeleton with multiple interfaces).

Many different ORB products are currently available; this diversity is healthy,
since it allows vendors to gear their products toward the specific needs
of their operational environment. It also creates the need for different object request
brokers to interoperate. Furthermore, there are distributed and client-server systems
that are not CORBA-compliant, and there is a growing need to provide
interoperability between those systems and CORBA. In order to answer these needs,
the Object Management Group has formulated the ORB interoperability architecture.

The interoperability approaches can be divided into mediated and immediate bridging.
With mediated bridging, interacting elements of one domain are transformed at the
boundary of each domain between the internal form specific to this domain and some
other form mutually agreed on by the domains. This common form could be either
standard (specified by the Object Management Group, for example, Internet Inter-
ORB Protocol or IIOP), or a private agreement between the two parties. With
immediate bridging, elements of interaction are transformed directly between the
internal form of one domain and the other. The second solution has the potential to be
much faster, but is the less general one; it therefore should be possible to use both.
Furthermore, if the mediation is internal to one execution environment (for example,
TCP/IP), it is known as a full bridge; otherwise, if the execution environment of one
object request broker is different from the common protocol, each object request
broker is said to be a half bridge.

System call

In computing, a system call is the mechanism used by an application program to
request service from the operating system.

Background

In addition to processing data in its own memory space, an application program might
want to use data and services provided by the system. Examples of the system
providing a service to an application include: reporting the current time, allocating
memory space, reading from or writing to a file, printing text on screen, and a host of
other necessary actions.

Since the machine and its devices are shared among all the programs, access
must be synchronized. Some of these activities could make the system fail or even
cause physical damage. For these reasons, access to the physical environment is
strictly managed by the BIOS and the operating system. The code and data of the OS
are located in a protected area of memory and cannot be accessed or damaged by user
applications. The only gateway to the hardware is system calls, which are defined in
the operating system. These calls check the requests and deliver them to the OS
drivers, which control the hardware input/output directly.

Modern processors can typically execute instructions in several, very different,
privileged states. In systems with two levels, they are usually called user mode and
supervisor mode. Different privilege levels are provided so that operating systems can
restrict the operations that programs running under them can perform, for reasons of
security and stability. Such operations include accessing hardware devices, enabling
and disabling interrupts, changing privileged processor state, and accessing memory
management units.

With the development of separate operating modes with varying levels of privilege, a
mechanism was needed for transferring control safely from lesser privileged modes to
higher privileged modes. Less privileged code could not simply transfer control to
more privileged code at any arbitrary point and with any arbitrary processor state. To
allow it to do so could let it break security. For instance, the less privileged code
could cause the higher privileged code to execute in the wrong order, or provide it
with a bad stack.

Mechanism

System calls often use a special CPU instruction which causes the processor to
transfer control to more privileged code, as previously specified by the more
privileged code. This allows the more privileged code to specify where it will be
entered as well as important processor state at the time of entry.

When the system call is invoked, the program which invoked it is interrupted, and
information needed to continue its execution later is saved. The processor then begins
executing the higher privileged code, which, by examining processor state set by the
less privileged code and/or its stack, determines what is being requested. When it is
finished, it returns to the program, restoring the saved state, and the program
continues executing.

Note that in many cases, the actual return to the program may not be immediate. If the
system call performs any kind of lengthy I/O operation, for instance disk or network
access, the program may be suspended (“blocked”) and taken off the “ready” queue
until the operation is complete, at which point the operating system will again make it
a candidate for execution.
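On Linux, the same request can be made through the ordinary C library wrapper or,
explicitly, by system-call number via syscall(); both end in the same privileged-mode
transition described above. A minimal, Linux-specific sketch:

/* Minimal sketch (Linux-specific): the same write issued once through the
 * libc wrapper and once by raw system-call number. */
#define _GNU_SOURCE
#include <unistd.h>        /* write, syscall */
#include <sys/syscall.h>   /* SYS_write */

int main(void)
{
    const char msg1[] = "via libc wrapper\n";
    const char msg2[] = "via raw syscall\n";

    /* The usual path: the C library prepares arguments and executes the
     * special trap instruction on our behalf. */
    write(STDOUT_FILENO, msg1, sizeof msg1 - 1);

    /* The same request made explicitly by system-call number. */
    syscall(SYS_write, STDOUT_FILENO, msg2, sizeof msg2 - 1);

    return 0;
}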

The library as an intermediary

Generally, operating systems provide a library that sits between normal programs and
the rest of the operating system, usually the C library (libc), such as glibc and the
Microsoft C runtime. This library handles the low-level details of passing information
to the kernel and switching to supervisor mode, as well as any data processing and
preparation which does not need to be done in privileged mode. Ideally, this reduces
the coupling between the operating system and the application, and increases
portability.

On exokernel-based systems, the library is especially important as an intermediary. On
exokernels, libraries shield user applications from the very low-level kernel API, and
provide abstractions and resource management.
