The computer's master control program. When the computer is turned on, a small
"boot program" loads the operating system. Although additional modules may be
loaded as needed, the main part, known as the "kernel," resides in memory at all times.
The operating system (OS) sets the standards for all application programs that run in
the computer. Applications "talk to" the operating system for all user interface and file
management operations. Also called an "executive" or "supervisor," an operating
system performs the following functions.
User Interface
Almost entirely graphics-based today, the user interface includes the windows, menus,
and methods of interaction between you and the computer. Prior to graphical user
interfaces (GUIs), all operation of the computer was performed by typing in commands.
Not at all extinct, command-line interfaces are alive and well and provide an alternate
way of running programs on all major operating systems.
Operating systems may support optional interfaces, both graphical and command line.
Although the overwhelming majority of people work with the default interfaces,
different "shells" offer variations of appearance and functionality.
Job Management
Job management controls the order and time in which programs are run and is more
sophisticated in the mainframe environment where scheduling the daily work has
always been routine. IBM's job control language (JCL) was developed decades ago.
In a desktop environment, batch files can be written to perform a sequence of
operations that can be scheduled to start at a given time.
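The desktop batch idea above can be sketched in Python: run a sequence of programs in order and record each one's exit status. The job commands here are illustrative, not any particular operating system's batch facility.

```python
# Sketch of a tiny "batch file" runner: execute a list of jobs in order
# and record each one's exit status. The commands are illustrative.
import subprocess, sys

jobs = [
    [sys.executable, "-c", "print('job 1')"],
    [sys.executable, "-c", "print('job 2')"],
]

results = []
for cmd in jobs:
    completed = subprocess.run(cmd, capture_output=True, text=True)
    results.append((completed.returncode, completed.stdout.strip()))

print(results)  # [(0, 'job 1'), (0, 'job 2')]
```

A real job manager would add scheduling (run at a given time) and dependency handling on top of this sequencing loop.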
Task Management
Data Management
Data management keeps track of the data on disk, tape and optical storage devices.
The application program deals with data by file name and a particular location within
the file. The operating system's file system knows where that data are physically
stored (which sectors on disk) and interaction between the application and operating
system is through the programming interface. Whenever an application needs to read
or write data, it makes a call to the operating system (see API).
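As a sketch of that interaction, Python's os module exposes thin wrappers over the operating system's open, write, and read calls; the file name and directory here are arbitrary.

```python
# An application asks the OS (via system calls wrapped by the os module)
# to create, write, and read back a file; the OS's file system decides
# which disk sectors actually hold the bytes.
import os, tempfile

path = os.path.join(tempfile.mkdtemp(), "example.dat")

fd = os.open(path, os.O_CREAT | os.O_WRONLY)   # system call: open/create
os.write(fd, b"hello, kernel")                 # system call: write
os.close(fd)

fd = os.open(path, os.O_RDONLY)                # system call: open
data = os.read(fd, 64)                         # system call: read
os.close(fd)
print(data)  # b'hello, kernel'
```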
Device Management
Security
Operating systems provide password protection to keep unauthorized users out of the
system. Some operating systems also maintain activity logs and accounting of the
user's time for billing purposes. They also provide backup and recovery routines for
starting over in the event of a system failure.
Process Management
Memory Management
The operating system can also write inactive memory pages to secondary storage.
This process is called "paging" or "swapping" – the terminology varies between
operating systems.
It is also typical for operating systems to employ otherwise unused physical memory
as a page cache; requests for data from a slower device can be retained in memory to
improve performance. The operating system can also pre-load the in-memory cache
with data that may be requested by the user in the near future; SuperFetch is an
example of this.
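The page-cache behavior described above can be modeled with a toy least-recently-used cache. A real kernel caches fixed-size pages in physical memory frames, so this dict-based sketch is only an illustration of the hit/miss/evict cycle.

```python
# Toy model of a page cache: recently read "pages" are kept in memory so
# repeated requests avoid the slow device.
from collections import OrderedDict

class PageCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()
        self.hits = self.misses = 0

    def read(self, page_no, slow_read):
        if page_no in self.pages:
            self.hits += 1
            self.pages.move_to_end(page_no)      # mark most recently used
        else:
            self.misses += 1
            self.pages[page_no] = slow_read(page_no)
            if len(self.pages) > self.capacity:  # evict least recently used
                self.pages.popitem(last=False)
        return self.pages[page_no]

cache = PageCache(capacity=2)
backing = lambda n: f"data-{n}"   # stands in for the slow device
for n in [1, 2, 1, 3, 1]:
    cache.read(n, backing)
print(cache.hits, cache.misses)  # 2 3
```

Pre-loading (as SuperFetch does) would amount to calling `read` speculatively before the user asks.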
Unix demarcates its path components with a slash (/), a convention followed by
operating systems that emulated it or at least its concept of hierarchical directories,
such as Linux, Amiga OS and Mac OS X. MS-DOS also emulated this feature, but
had already also adopted the CP/M convention of using slashes for additional options
to commands, so instead used the backslash (\) as its component separator. Microsoft
Windows continues with this convention; Japanese editions of Windows use ¥, and
Korean editions use ₩. Versions of Mac OS prior to OS X use a colon (:)
for a path separator. RISC OS uses a period (.).
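These separator conventions can be inspected with Python's pathlib "pure" path classes, which model each convention without touching a real file system.

```python
# Pure paths parse each platform's convention; no files are accessed.
from pathlib import PurePosixPath, PureWindowsPath

p = PurePosixPath("/usr/local/bin/python")       # slash-separated
w = PureWindowsPath(r"C:\Users\alice\notes.txt") # backslash-separated

print(p.parts)  # ('/', 'usr', 'local', 'bin', 'python')
print(w.parts)  # ('C:\\', 'Users', 'alice', 'notes.txt')
```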
Unix and Unix-like operating systems allow for any character in file names other than
the slash (including line feed (LF) and other control characters). Unix file names are
case sensitive, which allows multiple files to be created with names that differ only in
case. By contrast, Microsoft Windows file names are not case sensitive by default.
Windows also has a larger set of punctuation characters that are not allowed in file
names.
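The case-sensitivity difference can likewise be modeled with pathlib's pure paths: Windows-style comparison folds case, while POSIX-style comparison does not.

```python
from pathlib import PurePosixPath, PureWindowsPath

# POSIX comparison: case matters, so these are two different names.
posix_equal = PurePosixPath("README.txt") == PurePosixPath("readme.txt")
# Windows comparison folds case, so these name the same file.
windows_equal = PureWindowsPath("README.txt") == PureWindowsPath("readme.txt")
print(posix_equal, windows_equal)  # False True
```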
File systems may provide journaling, which provides safe recovery in the event of a
system crash. A journaled file system writes information twice: first to the journal,
which is a log of file system operations, then to its proper place in the ordinary file
system. In the event of a crash, the system can recover to a consistent state by
replaying a portion of the journal. In contrast, non-journaled file systems typically
need to be examined in their entirety by a utility such as fsck or chkdsk. Soft updates
is an alternative to journaling that avoids the redundant writes by carefully ordering
the update operations. Log-structured file systems and ZFS also differ from traditional
journaled file systems in that they avoid inconsistencies by always writing new copies
of the data, eschewing in-place updates.
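The journal-then-apply discipline can be sketched with a toy key-value store. Real file systems journal disk blocks (metadata or data), so this is only an illustration of the replay idea.

```python
# Toy write-ahead journal: each update is appended to a log before being
# applied to the "real" store, so a crash between the two steps can be
# repaired by replaying the log.
journal = []     # the log of intended operations
store = {}       # the "ordinary file system"

def journaled_write(key, value, crash_before_apply=False):
    journal.append((key, value))      # step 1: record intent
    if crash_before_apply:
        return                        # simulate a crash here
    store[key] = value                # step 2: apply in place

def recover():
    for key, value in journal:        # replay the journal
        store[key] = value

journaled_write("a", 1)
journaled_write("b", 2, crash_before_apply=True)   # crash mid-update
print(store)   # {'a': 1}  -- 'b' was lost
recover()
print(store)   # {'a': 1, 'b': 2}
```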
Many Linux distributions support some or all of ext2, ext3, ReiserFS, Reiser4, GFS,
GFS2, OCFS, OCFS2, and NILFS. Linux also has full support for XFS and JFS,
along with the FAT file systems, and NTFS.
Microsoft Windows includes support for FAT12, FAT16, FAT32, and NTFS. The
NTFS file system is the most efficient and reliable of the four Windows file systems,
and as of Windows Vista, is the only file system which the operating system can be
installed on. Windows Embedded CE 6.0 introduced exFAT, a file system suitable for
flash drives.
Mac OS X supports HFS+ as its primary file system, and it supports several other file
systems as well, including FAT16, FAT32, NTFS and ZFS.
Common to all these (and other) operating systems is support for file systems
typically found on removable media. FAT12 is the file system most commonly found
on floppy disks. ISO 9660 and Universal Disk Format (UDF) are two common formats
that target compact discs and DVDs, respectively. Mount Rainier is a newer extension to
UDF, supported by Linux 2.6 kernels and Windows Vista, that facilitates rewriting to
DVDs in the same fashion as has long been possible with floppy disks.
Networking
Most current operating systems are capable of using the TCP/IP networking protocols.
This means that computers running dissimilar operating systems can participate in a
common network for sharing resources such as computing, files, printers, and
scanners using either wired or wireless connections.
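A minimal sketch of such resource sharing over TCP/IP, using the standard socket API on a single machine; the same calls work between machines running different operating systems.

```python
# Two "programs" (a server thread and a client) exchange data over TCP/IP.
import socket, threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve_once():
    conn, _ = server.accept()
    conn.sendall(b"hello over TCP/IP")
    conn.close()

threading.Thread(target=serve_once, daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
message = client.recv(1024)
client.close()
server.close()
print(message)  # b'hello over TCP/IP'
```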
Many operating systems also support one or more vendor-specific legacy networking
protocols, for example, SNA on IBM systems, DECnet on systems from
Digital Equipment Corporation, and Microsoft-specific protocols on Windows.
Specific protocols for specific tasks may also be supported such as NFS for file
access.
Security
Many operating systems include some level of security. Security is based on two
ideas: that the operating system provides access to a number of resources, directly or
indirectly, such as files on a local disk, privileged system calls, personal information
about users, and the services offered by the programs running on the system; and that
the operating system is capable of distinguishing between requesters who are
authorized to access a resource and those who are not.
Security of operating systems has long been a concern because of highly sensitive
data held on computers, of both a commercial and a military nature. The United States
Government Department of Defense (DoD) created the Trusted Computer System
Evaluation Criteria (TCSEC) which is a standard that sets basic requirements for
assessing the effectiveness of security. This became of vital importance to operating
system makers, because the TCSEC was used to evaluate, classify and select
computer systems being considered for the processing, storage and retrieval of
sensitive or classified information.
Internal security
Internal security can be thought of as protecting the computer's resources from the
programs concurrently running on the system. Most operating systems set programs
running natively on the computer's processor, so the problem arises of how to stop
these programs from doing the same tasks, with the same privileges, as the operating
system (which is, after all, just a program too). Processors used for general purpose
operating systems generally have a hardware concept of privilege. Generally less
privileged programs are automatically blocked from using certain hardware
instructions, such as those to read or write from external devices like disks. Instead,
they have to ask the privileged program (operating system kernel) to read or write.
The operating system therefore gets the chance to check the program's identity and
allow or refuse the request.
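One visible consequence of this mediation can be sketched in Python: every I/O request reaches the kernel as a system call, and the kernel can refuse it and return an error rather than let the program touch the device. Here the kernel rejects a write on a file descriptor the process has already given up.

```python
# The kernel mediates every I/O request and may refuse it: writing to a
# closed file descriptor is rejected with an error (EBADF) instead of
# ever reaching any device.
import os

r, w = os.pipe()
os.close(w)
try:
    os.write(w, b"data")          # request refused by the kernel
except OSError as e:
    outcome = f"kernel refused: errno {e.errno}"
os.close(r)
print(outcome)
```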
An alternative strategy, and the only sandbox strategy available in systems that do not
meet the Popek and Goldberg virtualization requirements, is for the operating system
not to run user programs as native code, but instead to either emulate a processor or
provide a host for a p-code-based system such as Java.
Internal security is especially relevant for multi-user systems; it allows each user of
the system to have private files that the other users cannot tamper with or read.
Internal security is also vital if auditing is to be of any use, since a program can
potentially bypass the operating system, inclusive of bypassing auditing.
External security
Typically an operating system offers (or hosts) various services to other network
computers and users. These services are usually provided through ports or numbered
access points beyond the operating systems network address. Services include
offerings such as file sharing, print services, email, web sites, and file transfer
protocols (FTP), most of which can have compromised security.
At the front line of security are hardware devices known as firewalls or intrusion
detection/prevention systems. At the operating system level, there are a number of
software firewalls available, as well as intrusion detection/prevention systems. Most
modern operating systems include a software firewall, which is enabled by default. A
software firewall can be configured to allow or deny network traffic to or from a
service or application running on the operating system. Therefore, one can run an
otherwise insecure service, such as Telnet or FTP, without being threatened by a
security breach, because the firewall denies all traffic attempting to connect to
the service on that port.
Today, most modern operating systems contain graphical user interfaces (GUIs). A few
older operating systems tightly integrated the GUI into the kernel; for example, in the
original implementations of Microsoft Windows and Mac OS, the graphical
subsystem was actually part of the operating system. More modern operating systems
are modular, separating the graphics subsystem from the kernel (as is now done in
Linux, and Mac OS X) so that the graphics subsystem is not part of the OS at all.
Many operating systems allow the user to install or create any user interface they
desire. The X Window System in conjunction with GNOME or KDE is a commonly
found setup on most Unix and Unix derivative (BSD, Linux, Minix) systems.
Graphical user interfaces evolve over time. For example, Windows has modified its
user interface almost every time a new major version of Windows is released, and the
Mac OS GUI changed dramatically with the introduction of Mac OS X in 2001.
Device drivers
Mainframe computers
The earliest operating systems were developed for mainframe computer architectures
in the 1960s. The enormous investment in software for these systems caused most of
the original computer manufacturers to continue to develop hardware and operating
systems that are compatible with those early operating systems. Those early systems
pioneered many of the features of modern operating systems, and several mainframe
operating systems descended from them are still supported.
Modern mainframes typically also run Linux or Unix variants. A "Datacenter" variant
of Windows Server 2003 is also available for some mainframe systems.
Embedded systems
Embedded systems use a variety of dedicated operating systems. In some cases, the
"operating system" software is directly linked to the application to produce a
monolithic special-purpose program. In the simplest embedded systems, there is no
distinction between the OS and the application. Operating systems designed to meet
strict timing requirements in embedded settings are known as real-time operating systems.
Unix and Unix-like operating systems
The Unix-like family is a diverse group of operating systems, with several major
sub-categories including System V, BSD, and Linux.
The name "UNIX" is a trademark of The Open Group which licenses it for use
with any operating system that has been shown to conform to their definitions.
"Unix-like" is commonly used to refer to the large set of operating systems which
resemble the original Unix.
Unix systems run on a wide variety of machine architectures. They are used heavily
as server systems in business, as well as workstations in academic and engineering
environments. Free software Unix variants, such as Linux and BSD, are popular in
these areas. The market share for Linux is divided between many different
distributions. Enterprise-class distributions by Red Hat or SUSE are used by
corporations, but some home users may use those products. Historically, home users
typically installed a distribution themselves, but in 2007 Dell began to offer the
Ubuntu Linux distribution on home PCs. Linux on the desktop is also popular in the
developer and hobbyist operating system development communities (see below).
Market share statistics for freely available operating systems are usually inaccurate,
since most free operating systems are not purchased, making usage under-represented.
On the other hand, market share statistics based on total downloads of free operating
systems are often inflated, as there is no economic disincentive to acquiring multiple
operating systems, so users can download several, test them, and decide which they
like best.
Some Unix variants like HP's HP-UX and IBM's AIX are designed to run only on that
vendor's hardware. Others, such as Solaris, can run on multiple types of hardware,
including x86 servers and PCs. Apple's Mac OS X, a hybrid kernel-based BSD variant
derived from NeXTSTEP, Mach, and FreeBSD, has replaced Apple's earlier (non-
Unix) Mac OS.
Open source
See POSIX -- full Unix interoperability heavily depends on full POSIX
standards compliance. The POSIX standard applies to any operating system.
Over the past several years, the trend in the Unix and Unix-like space has been to
open source operating systems. Many areas previously dominated by UNIX have seen
significant inroads by Linux; Solaris source code is now the basis of the OpenSolaris
project.
The team at Bell Labs that designed and developed Unix went on to develop Plan 9
and Inferno, which were designed for modern distributed environments. They had
graphics built in, unlike Unix counterparts that added them to the design later. Plan 9
did not become popular because, unlike many Unix distributions, it was not originally
free. It has since been released under the free and open-source Lucent Public
License and has an expanding community of developers. Inferno was sold to Vita
Nuova and has been released under a GPL/MIT license.
Mac OS X
The operating system was first released in 1999 as Mac OS X Server 1.0, with a
desktop-oriented version (Mac OS X v10.0) following in March 2001. Since then,
four more distinct "end-user" and "server" editions of Mac OS X have been released,
the most recent being Mac OS X v10.4, which was first made available in April 2005.
Releases of Mac OS X are named after big cats; Mac OS X v10.4 is usually referred
to by Apple and users as "Tiger". In October 2007, Apple will release Mac OS X 10.5,
nicknamed "Leopard".
Microsoft Windows
As of July 2007, Microsoft Windows held a dominant share of the worldwide desktop
market, although some predict this will dwindle as Microsoft's restrictive licensing,
(CD-Key) registration, and customer practices cause increased interest in open-source
operating systems. Windows is also used on low-end and mid-range servers,
supporting applications such as web servers and database servers. In recent years,
Microsoft has spent significant marketing and research-and-development money to
demonstrate that Windows is capable of running any enterprise application, which has
resulted in consistent price/performance records (see the TPC) and significant
acceptance in the enterprise market.
The most widely used version of the Microsoft Windows family is Microsoft
Windows XP, released on October 25, 2001. The latest release of Windows XP is
Windows XP Service Pack 2, released on August 6, 2004.
In November 2006, after more than five years of development work, Microsoft
released Windows Vista, a major new version of Microsoft Windows which contains a
large number of new features and architectural changes. Chief amongst these are a
new user interface and visual style called Windows Aero, a number of new security
features such as User Account Control, and new multimedia applications such as
Windows DVD Maker.
Hobby operating system development
Operating system development, or OSDev for short, has a large following as a hobby.
Some operating systems, such as Linux, derived from hobby operating system
projects. The design and implementation of an operating system requires skill and
determination, and the term can cover anything from a basic "Hello World" boot
loader to a fully featured kernel. One classical example of this is the Minix operating
system -- an OS that was designed as a teaching tool but was heavily used by
hobbyists before Linux eclipsed it in popularity.
Other
Mainframe operating systems, such as IBM's z/OS, and embedded operating systems
such as VxWorks, eCos, and Palm OS, are usually unrelated to Unix and Windows;
exceptions include Windows CE, Windows NT Embedded 4.0, and Windows XP
Embedded, which are descendants of Windows, as well as several *BSDs and Linux
distributions tailored for embedded systems. OpenVMS from Hewlett-Packard
(formerly DEC) is still under active development.
Older operating systems which are still used in niche markets include OS/2 from
IBM; Mac OS, the non-Unix precursor to Apple's Mac OS X; BeOS; and XTS-300.
Popular prior to the dot-com era, operating systems such as AmigaOS and RISC OS
continue to be developed as minority platforms for enthusiast communities and
specialist applications.
Operating systems (OS) provide a set of functions needed and used by most
application programs on a computer, and the necessary linkages for the control and
synchronization of the computer's hardware. On the first computers, without an
operating system, every program needed the full hardware specification to run
correctly and perform standard tasks, and its own drivers for peripheral devices like
printers and card readers. The growing complexity of hardware and application
programs eventually made operating systems a necessity.
Background
Early computers lacked any form of operating system. The user had sole use of the
machine and would arrive armed with program and data, often on punched paper and
tape. The program would be loaded into the machine, and the machine would be set to
work until the program completed or crashed. Programs could generally be debugged
via a front panel using switches and lights. It is said that Alan Turing was a master of
this on the early Manchester Mark I machine, and he was already deriving the
primitive conception of an operating system from the principles of the Universal
Turing machine.
Later machines came with libraries of support code, which would be linked to the
user's program to assist in operations such as input and output. This was the genesis of
the modern-day operating system. However, machines still ran a single job at a time;
at Cambridge University in England the job queue was at one time a washing line
from which tapes were hung with different colored clothes-pegs to indicate job-
priority.
As machines became more powerful, the time needed for a run of a program
diminished and the time to hand off the equipment became very large by comparison.
Accounting for and paying for machine usage moved on from checking the wall clock
to automatic logging by the computer. Run queues evolved from a literal queue of
people at the door, to a heap of media on a jobs-waiting table, or batches of punch-
cards stacked one on top of the other in the reader, until the machine itself was able to
select and sequence which magnetic tape drives were online. Where program
developers had originally had access to run their own jobs on the machine, they were
supplanted by dedicated machine operators who looked after the well-being and
maintenance of the machine and were less and less concerned with implementing
tasks manually. When commercially available computer centers were faced with the
implications of data lost through tampering or operational errors, equipment vendors
were put under pressure to enhance the runtime libraries to prevent misuse of system
resources. Automated monitoring was needed not just for CPU usage but for counting
pages printed, cards punched, cards read, disk storage used and for signalling when
operator intervention was required by jobs such as changing magnetic tapes.
All these features were building up towards the repertoire of a fully capable operating
system. Eventually the runtime libraries became an amalgamated program that was
started before the first customer job and could read in the customer job, control its
execution, clean up after it, record its usage, and immediately go on to process the
next job. Significantly, it became possible for programmers to use symbolic program-
code instead of having to hand-encode binary images, once task-switching allowed a
computer to perform translation of a program into binary form before running it.
These resident background programs, capable of managing multistep processes, were
often called monitors or monitor-programs before the term OS established itself.
The broader categories of systems and application software are discussed in the
computer software article.
Early operating systems were very diverse, with each vendor producing one or more
operating systems specific to their particular hardware. Every operating system, even
from the same vendor, could have radically different models of commands, operating
procedures, and such facilities as debugging aids. Typically, each time the
manufacturer brought out a new machine, there would be a new operating system.
This state of affairs continued until the 1960s, when IBM developed the System/360
series of machines, which all used the same instruction architecture. Because there
were enormous performance differences across the range, a single operating system
could not be used, and a family of operating systems was developed. See: OS/360.
(The problems encountered in the development of the OS/360 are legendary, and are
described by Fred Brooks in The Mythical Man-Month—a book that has become a
classic of software engineering).
Control Data Corporation developed the Scope operating system in the 1960s, for
batch processing. In cooperation with the University of Minnesota, the KRONOS and
later the NOS operating systems were developed during the 1970s, which supported
simultaneous batch and timesharing use. Like many commercial timesharing systems,
its interface was an extension of the Dartmouth BASIC operating systems, one of the
pioneering efforts in timesharing and programming languages. In the late 1970s,
Control Data and the University of Illinois developed the PLATO operating system,
which used plasma panel displays and long-distance time sharing networks. Plato was
remarkably innovative for its time, featuring real-time chat, and multi-user graphical
games.
General Electric and MIT developed General Comprehensive Operating System (or
General Electric Comprehensive Operating System) known as GECOS and later
GCOS when General Electric's computer business was acquired by Honeywell.
GECOS introduced the concept of ringed security privilege levels.
Digital Equipment Corporation developed many operating systems for its various
computer lines, including the simple RT-11 system for its 16-bit PDP-11 class
machines, the VMS system for the 32-bit VAX computer, and TOPS-10 and TOPS-20
time sharing systems for the 36-bit PDP-10 class systems. Prior to the widespread use
of UNIX, TOPS-10 was a particularly popular system in universities, and in the early
ARPANET community.
The UNIX operating system was developed at AT&T Bell Laboratories in the late
1960s. Because it was essentially free in early editions, easily obtainable, and easily
modified, it achieved wide acceptance. It also became a requirement within the Bell
System operating companies. Since it was written in a high-level language, when that
language was ported to a new machine architecture, UNIX was also able to be ported.
This portability permitted it to become the choice for a second generation of
minicomputers and the first generation of workstations. Through widespread use it
exemplified the idea of an operating system that was conceptually the same across
various hardware platforms. It was still owned by AT&T, however, and that limited
its use to groups or corporations who could afford to license it.
Many early operating systems were collections of utilities to allow users to run
software on their systems. Some companies were able to develop better systems, such
as early Digital Equipment Corporation systems, but others never supported features
that were useful on other hardware types.
In the late 1960s through the late 1970s, several hardware capabilities evolved that
allowed similar or ported software to run on more than one system. Early systems had
utilized Microprogramming to implement features on their systems in order to permit
different underlying architectures to appear to be the same as others in a series. In fact,
most System/360 models after the 360/40 (except the 360/165 and 360/168) were
microprogrammed implementations.
One system which evolved in this time frame was the Pick operating system. The Pick
system was developed and sold by Microdata Corporation, which created the precursors
of the system. Pick is an example of a system which started as a database application
support program, graduated to system work, and still exists across a wide variety of
platforms, supported on most UNIX systems as an add-on database system.
Other packages such as Oracle are middleware and contain many of the features of
operating systems, but are in fact large applications supported on many hardware
platforms.
As more and more hardware was packed into ever smaller packages, first with bit-slice
levels of integration and then with entire systems on a single chip, small 4-bit and
8-bit processors came to be known as microprocessors. Most were not
microprogrammed, but were completely integrated general-purpose processors.
The case of 8-bit home computers and game consoles
Home computers
Although most of the small 8-bit home computers of the 1980s, such as the Commodore
64, the Amstrad CPC, the ZX Spectrum series, and others, could use a "normal"
disk-loading operating system such as CP/M or GEOS, they could generally work without
one. In fact, most if not all of these computers shipped with a built-in BASIC
interpreter in ROM, which also served as a crude operating system, allowing minimal
file management operations (such as deletion and copying) and sometimes disk
formatting to be performed, along of course with application loading and execution,
which sometimes required a non-trivial command sequence, as on the Commodore
64.
The fact that the majority of these machines were bought for entertainment and
educational purposes and were seldom used for more "serious" or business/science
oriented applications, partly explains why a "true" operating system was not
necessary.
Another reason is that they were usually single-task and single-user machines and
shipped with minimal amounts of RAM, usually between 4 and 256 kilobytes, with 64
and 128 being common figures, and 8-bit processors, so an operating system's
overhead would likely compromise the performance of the machine without really
being necessary.
Even the rare word processor and office suite applications were mostly self-contained
programs which took over the machine completely, as also did video games.
Finally, most of these machines didn't even ship with a built-in floppy disk drive,
which made using a disk-based OS impossible or an expensive luxury.
Virtually all video game consoles and arcade cabinets designed and built after
1980 were true digital machines (unlike the analog Pong clones and derivatives), and
some of them carried a minimal form of BIOS or built-in game, such as the
ColecoVision, the Sega Master System, and the SNK Neo Geo. There were, however,
successful designs where a BIOS was not necessary, such as the Nintendo NES and its
clones.
Modern game consoles, starting with the PC Engine, all have a minimal BIOS that
also provides some interactive utilities such as memory card management, audio or
video CD playback, and copy prevention, and sometimes carries libraries for
developers to use. Few of these cases, however, would qualify as a "true" operating
system.
The most notable exceptions are probably the Dreamcast game console, which
includes a minimal BIOS like the PlayStation's but can load the Windows CE
operating system from the game disc, allowing games to be ported easily from the PC
world, and the Xbox game console, which is little more than a disguised Intel-based
PC running a secret, modified version of Microsoft Windows in the background.
Furthermore, there are Linux versions that will run on a PlayStation or Xbox, and
perhaps on other game consoles as well, provided they have access to a large mass
storage device and a reasonable amount of RAM (the bare minimum for a GUI is
around 512 kilobytes, as the case of the Commodore Amiga or early Atari ST shows;
GEOS, however, ran on a stock C64 which came with as little as 64 kilobytes).
Long before that, Sony had released a kind of development kit called the Net Yaroze
for its first PlayStation platform, which provided a series of programming and
development tools to be used with a normal PC and a specially modified "Black
PlayStation" that could be interfaced with a PC and download programs from it.
In general, these operations require a functional OS on both platforms involved.
In general, it can be said that video game consoles and arcade coin-operated machines
used at most a built-in BIOS during the 1970s, 1980s, and most of the 1990s, while
from the PlayStation era onward they became more and more sophisticated, to the
point of requiring a generic or custom-built OS for aiding development and
expandability.
The decreasing cost of display equipment and processors made it practical to provide
graphical user interfaces for many operating systems, such as the generic X Window
System that is provided with many UNIX systems, or other graphical systems such as
Microsoft Windows, the RadioShack Color Computer's OS-9, Commodore's
AmigaOS, Level II, Apple's Mac OS, or even IBM's OS/2. The original GUI was
developed at Xerox Palo Alto Research Center in the early '70s (the Alto computer
system) and imitated by many vendors.
Distributed systems
The object-oriented model for a distributed system is based on the model supported
by object-oriented programming languages. Distributed object systems generally
provide remote method invocation (RMI) in an object-oriented programming
language together with operating systems support for object sharing and persistence.
Remote procedure calls, which are used in client-server communication, are replaced
by remote method invocation in distributed object systems. See also Object-oriented
programming.
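A minimal sketch of remote method invocation, using Python's standard xmlrpc modules rather than CORBA (the Counter class and its method are illustrative): the client calls what looks like a local method, and the call is marshalled over the network to the server-side object.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

class Counter:
    """Server-side object whose methods clients invoke remotely."""
    def __init__(self):
        self.value = 0
    def increment(self, amount):
        self.value += amount
        return self.value

# Port 0 lets the OS pick a free port; logRequests=False keeps output quiet.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]
server.register_instance(Counter())

# Serve requests in a background thread so the "client" below can call in.
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client holds only an object reference (here a URL); each call looks
# local but is marshalled over the network to the server-side object.
proxy = ServerProxy(f"http://127.0.0.1:{port}")
r1 = proxy.increment(5)
r2 = proxy.increment(2)
server.shutdown()
print(r1, r2)  # 5 7
```

The proxy plays the role of the client-side stub: it maps the client's language onto the wire protocol, just as an ORB stub maps a client language onto the ORB core.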
The state of an object consists of the values of its instance variables. In the object-
oriented paradigm, the state of a program is partitioned into separate parts, each of
which is associated with an object. Since object-based programs are logically
partitioned, the physical distribution of objects into different processes or computers
in a distributed system is a natural extension. The Object Management Group's
Common Object Request Broker Architecture (CORBA) is a widely used standard for
distributed object systems. Other object management systems include the Open
Software Foundation's Distributed Computing Environment (DCE) and Microsoft's
Distributed Component Object Model (DCOM).
This model defines common object semantics for specifying the externally visible
characteristics of objects in a standard and implementation-independent way. In this
model, clients request services from objects (which will also be called servers)
through a well-defined interface. This interface is specified in Object Management
Group Interface Definition Language (IDL). The request is an event, and it carries
information including an operation, the object reference of the service provider, and
actual parameters (if any). The object reference is a name that defines an object
reliably.
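The shape of such a request can be sketched in Python; `Request`, `registry`, and the object-reference format below are invented for illustration and are not the CORBA API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Request:
    operation: str          # the operation to perform
    object_ref: str         # name that reliably identifies the target object
    parameters: tuple = ()  # actual parameters, if any

# Hypothetical registry mapping object references to service providers.
registry = {"clock:1": {"now": lambda: "12:00"}}

def invoke(req):
    """Locate the provider by its object reference and apply the operation."""
    provider = registry[req.object_ref]
    return provider[req.operation](*req.parameters)

result = invoke(Request("now", "clock:1"))
```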
The central component of CORBA is the object request broker (ORB). It encompasses
the entire communication infrastructure necessary to identify and locate objects,
handle connection management, and deliver data. In general, the object request broker
is not required to be a single component; it is simply defined by its interfaces. The
core is the most crucial part of the object request broker; it is responsible for
communication of requests.
The basic functionality provided by the object request broker consists of passing the
requests from clients to the object implementations on which they are invoked. In
order to make a request, the client can communicate with the ORB core through the
Interface Definition Language stub or through the dynamic invocation interface (DII).
The stub represents the mapping between the language of implementation of the client
and the ORB core. Thus the client can be written in any language as long as the
implementation of the object request broker supports this mapping. The ORB core
then transfers the request to the object implementation which receives the request as
an up-call through either an Interface Definition Language (IDL) skeleton (which
represents the object interface at the server side and works with the client stub) or a
dynamic skeleton (a skeleton with multiple interfaces).
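The stub/skeleton round trip can be modeled in miniature; `Stub`, `Skeleton`, and `OrbCore` below are toy stand-ins for illustration, not a real ORB implementation:

```python
class Stub:
    """Client-side proxy: turns method calls into requests for the ORB."""
    def __init__(self, orb, object_ref):
        self._orb = orb
        self._ref = object_ref

    def __getattr__(self, operation):
        def call(*params):
            return self._orb.transfer(self._ref, operation, params)
        return call

class Skeleton:
    """Server-side counterpart: up-calls the object implementation."""
    def __init__(self, impl):
        self._impl = impl

    def upcall(self, operation, params):
        return getattr(self._impl, operation)(*params)

class OrbCore:
    """Passes requests from client stubs to registered skeletons."""
    def __init__(self):
        self._objects = {}

    def register(self, ref, impl):
        self._objects[ref] = Skeleton(impl)

    def transfer(self, ref, operation, params):
        return self._objects[ref].upcall(operation, params)

class Adder:
    def add(self, a, b):
        return a + b

orb = OrbCore()
orb.register("adder:1", Adder())
stub = Stub(orb, "adder:1")
result = stub.add(2, 3)  # looks local; routed through the "ORB" to Adder
```

In a real ORB the stub would also marshal parameters into a wire format and the transfer would cross process or machine boundaries.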
Many different ORB products are currently available; this diversity is
healthy, since it allows vendors to gear their products toward the specific
needs of their operational environments. It also creates the need for different object request
brokers to interoperate. Furthermore, there are distributed and client-server systems
that are not CORBA-compliant, and there is a growing need to provide
interoperability between those systems and CORBA. In order to answer these needs,
the Object Management Group has formulated the ORB interoperability architecture.
The interoperability approaches can be divided into mediated and immediate bridging.
With mediated bridging, interacting elements of one domain are transformed at the
boundary of each domain between the internal form specific to this domain and some
other form mutually agreed on by the domains. This common form could be either
standard (specified by the Object Management Group, for example, Internet Inter-
ORB Protocol or IIOP), or a private agreement between the two parties. With
immediate bridging, elements of interaction are transformed directly between the
internal form of one domain and the other. Immediate bridging has the potential to be
much faster but is less general; ideally, both approaches should be available.
Furthermore, if the mediation is internal to one execution environment (for example,
TCP/IP), it is known as a full bridge; otherwise, if the execution environment of one
object request broker is different from the common protocol, each object request
broker is said to be a half bridge.
System call
Background
In addition to processing data in its own memory space, an application program might
want to use data and services provided by the system. Examples of the system
providing a service to an application include: reporting the current time, allocating
memory space, reading from or writing to a file, printing text on screen, and a host of
other necessary actions.
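In Python, several of these services are reachable through thin wrappers over the underlying system calls; a small sketch:

```python
import os
import tempfile
import time

now = time.time()   # current time, via a clock system call
pid = os.getpid()   # process identity, via getpid(2)

fd, path = tempfile.mkstemp()   # creates and opens a temp file, via open(2)
os.write(fd, b"hello")          # write(2)
os.lseek(fd, 0, os.SEEK_SET)    # lseek(2): rewind to the start of the file
data = os.read(fd, 5)           # read(2)
os.close(fd)                    # close(2)
os.unlink(path)                 # unlink(2): remove the file
```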
Since the machine and its devices are shared among all programs, access to them
must be synchronized. Some operations could crash the system or even physically
damage hardware. For these reasons, access to the physical environment is
strictly managed by the BIOS and the operating system. The code and data of the OS are
located in a protected area of memory and cannot be accessed or damaged by user
applications. The only gateway to the hardware is the set of system calls defined by the
operating system. These calls validate requests and deliver them to the OS drivers,
which control the hardware input/output directly.
With the development of separate operating modes with varying levels of privilege, a
mechanism was needed for transferring control safely from lesser privileged modes to
higher privileged modes. Less privileged code could not simply transfer control to
more privileged code at any arbitrary point or with arbitrary processor state;
allowing it to do so would break security. For instance, the less privileged code
could cause the higher privileged code to execute in the wrong order, or provide it
with a bad stack.
Mechanism
System calls often use a special CPU instruction which causes the processor to
transfer control to more privileged code, as previously specified by the more
privileged code. This allows the more privileged code to specify where it will be
entered as well as important processor state at the time of entry.
When the system call is invoked, the program which invoked it is interrupted, and
information needed to continue its execution later is saved. The processor then begins
executing the higher privileged code, which, by examining processor state set by the
less privileged code and/or its stack, determines what is being requested. When it is
finished, it returns to the program, restoring the saved state, and the program
continues executing.
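The dispatch step can be modeled in miniature: the privileged entry point examines state supplied by the caller to determine which service is requested. The service numbers and handlers below are invented for illustration:

```python
# Hypothetical service numbers, in the spirit of a syscall table.
SYS_ADD, SYS_UPPER = 0, 1

_services = {
    SYS_ADD:   lambda a, b: a + b,
    SYS_UPPER: lambda s: s.upper(),
}

def trap(number, *args):
    """Stand-in for the privileged entry point: examine the caller's
    state (service number and arguments), dispatch, and return."""
    handler = _services[number]   # determine what is being requested
    return handler(*args)

result = trap(SYS_ADD, 2, 3)
```

A real kernel would additionally save the caller's registers on entry and restore them on return.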
Note that in many cases, the actual return to the program may not be immediate. If the
system call performs any kind of lengthy I/O operation, for instance disk or network
access, the program may be suspended (“blocked”) and taken off the “ready” queue
until the operation is complete, at which point the operating system will again make it
a candidate for execution.
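Blocking can be observed with an ordinary pipe: a read on an empty pipe suspends the caller until data arrives, much as the OS suspends a process waiting on slow I/O. In this sketch the 0.1-second delay stands in for a slow device:

```python
import os
import threading
import time

r, w = os.pipe()   # a kernel pipe: read end r, write end w

def writer():
    time.sleep(0.1)        # simulate a slow device
    os.write(w, b"done")

t = threading.Thread(target=writer)
t.start()
data = os.read(r, 4)       # blocks until the writer supplies data
t.join()
os.close(r)
os.close(w)
```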
Generally, operating systems provide a library that sits between normal programs and
the rest of the operating system, usually the C library (libc), such as glibc and the
Microsoft C runtime. This library handles the low-level details of passing information
to the kernel and switching to supervisor mode, as well as any data processing and
preparation which does not need to be done in privileged mode. Ideally, this reduces
the coupling between the operating system and the application, and increases
portability.
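This layering can be glimpsed with `ctypes`: calling `getpid()` through the C library directly returns the same value as Python's `os.getpid()` wrapper. This sketch assumes a platform where `ctypes.util.find_library` can locate libc (e.g. Linux):

```python
import ctypes
import ctypes.util
import os

libc = ctypes.CDLL(ctypes.util.find_library("c"))  # load the C library
libc_pid = libc.getpid()   # call getpid() through libc directly
```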
On exokernel-based systems, the library is especially important as an intermediary. On
exokernels, libraries shield user applications from the very low-level kernel API and
provide the abstractions and resource management that a conventional kernel would.