
BASIC FUNCTIONS OF AN OPERATING SYSTEM

Definition

An operating system (OS) is system software that manages computer hardware and
software resources and provides common services for computer programs.

An operating system is a group of computer programs that coordinates all the activities among
computer hardware devices. It is the first program loaded into the computer by a boot program
and remains in memory at all times. Application programs usually require an operating
system to function.

Functions of an operating system

The basic functions of an operating system are:

i. Booting the computer
ii. Managing various peripheral devices, e.g. mouse and keyboard (input/output device
management)
iii. Providing a user interface, e.g. a command line interface (CLI) or a graphical user
interface (GUI)
iv. Memory management
v. Processor management
vi. File management, which refers to the way that the operating system manipulates,
stores, retrieves and saves data

Booting the computer

The process of starting or restarting the computer is known as booting. A cold boot is when you
turn on a computer that has been turned off completely. A warm boot is the process of using the
operating system to restart the computer.

Performs basic computer tasks

The operating system performs basic computer tasks, such as managing the various peripheral
devices such as the mouse, keyboard and printers. For example, most operating systems now are
plug and play which means a device such as a printer will automatically be detected and
configured without any user intervention.

Provides a user interface

A user interacts with software through the user interface. The two main types of user interfaces
are the command line interface and the graphical user interface (GUI). With a command line interface, the user
interacts with the operating system by typing commands to perform specific tasks. An example
of a command line interface is DOS (disk operating system). With a graphical user interface, the
user interacts with the operating system by using a mouse to access windows, icons, and menus.
Examples of operating systems that provide a graphical user interface are Windows Vista and Windows 7.
The operating system is responsible for providing a consistent application program interface
(API) which is important as it allows a software developer to write an application on one
computer and know that it will run on another computer of the same type even if the amount of
memory or amount of storage is different on the two machines.

Handles system resources

The operating system also handles system resources such as the computer's memory and sharing
of the central processing unit (CPU) time by various applications or peripheral devices. Programs
and input methods are constantly competing for the attention of the CPU and demand memory,
storage and input/output bandwidth. The operating system ensures that each application gets the
resources it needs in order to maximise the functionality of the overall system.

Provides file management

The operating system also handles the organisation and tracking of files and directories (folders)
saved or retrieved from a computer disk. The file management system allows the user to perform
such tasks as creating files and directories, renaming files, copying and moving files, and deleting
files. The operating system keeps track of where files are located on the hard drive through the
file system it uses. The two main types of file system are the File Allocation Table (FAT) and the
New Technology File System (NTFS).
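
To make these file-management tasks concrete, here is a small, hedged C sketch that creates a directory and a file, renames the file, and then removes both. It uses only standard C and POSIX calls (mkdir, fopen, rename, remove, rmdir); the names demo_dir and notes.txt are invented purely for illustration.

#include <stdio.h>      /* fopen, fprintf, rename, remove, perror */
#include <sys/stat.h>   /* mkdir */
#include <unistd.h>     /* rmdir */

int main(void)
{
    /* Create a directory (hypothetical name, for illustration only). */
    if (mkdir("demo_dir", 0755) != 0) {
        perror("mkdir");
        return 1;
    }

    /* Create a file inside the directory and save some data to it. */
    FILE *fp = fopen("demo_dir/notes.txt", "w");
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }
    fprintf(fp, "file management example\n");
    fclose(fp);

    /* Rename the file, then delete the file and the directory. */
    rename("demo_dir/notes.txt", "demo_dir/notes-renamed.txt");
    remove("demo_dir/notes-renamed.txt");
    rmdir("demo_dir");
    return 0;
}

Whichever file system the disk uses (FAT, NTFS, or another), a program only ever sees calls like these; the operating system translates them into the on-disk structures.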

Types of file system

 File Allocation Table (FAT)
 New Technology File System (NTFS)

The File Allocation Table (FAT) file system uses a file allocation table that records which clusters
are used or unused and where files are located within those clusters.

NTFS is a file system introduced by Microsoft and it has a number of advantages over the
previous file system, named FAT32 (File Allocation Table).

One major advantage of NTFS is that it includes features to improve reliability. For example,
NTFS includes fault tolerance, which automatically repairs hard drive errors without displaying
error messages. It also keeps detailed transaction logs, which track hard drive errors. This can
help prevent hard disk failures and makes it possible to recover files if the hard drive does fail.

NTFS also allows permissions (such as read, write, and execute) to be set for individual
directories and files.

LINUX

What is Linux?
Linux is, in simplest terms, an operating system. It is the software on a computer that enables
applications and the computer operator to access the devices on the computer to perform desired
functions. The operating system (OS) relays instructions from an application to, for instance, the
computer's processor. The processor performs the instructed task, then sends the results back to
the application via the operating system.
Explained in these terms, Linux is very similar to other operating systems, such as Windows and
OS X.
But something sets Linux apart from these operating systems. The Linux operating system
represented a $25 billion ecosystem in 2008. Since its inception in 1991, Linux has grown to
become a force in computing, powering everything from the New York Stock Exchange to
mobile phones to supercomputers to consumer devices.
As an open operating system, Linux is developed collaboratively, meaning no one company is
solely responsible for its development or ongoing support. Companies participating in the Linux
economy share research and development costs with their partners and competitors. This
spreading of development burden amongst individuals and companies has resulted in a large and
efficient ecosystem and unheralded software innovation.
Over 1,000 developers, from at least 100 different companies, contribute to every kernel release.
In the past two years alone, over 3,200 developers from 200 companies have contributed to the
kernel--which is just one small piece of a Linux distribution.
This article will explore the various components of the Linux operating system, how they are
created and work together, the communities of Linux, and Linux's incredible impact on the IT
ecosystem.
Where is Linux?
One of the most noted properties of Linux is where it can be used. Windows and OS X are
predominantly found on personal computing devices such as desktop and laptop computers.
Other operating systems, such as Symbian, are found on small devices such as phones and PDAs,
while mainframes and supercomputers found in major academic and corporate labs use
specialized operating systems such as AS/400 and the Cray OS.
Linux, which began its existence as a server OS and has become useful as a desktop OS, can
also be used on all of these devices. “From wristwatches to supercomputers” is the popular
description of Linux's capabilities.
An abbreviated list of some of the popular electronic devices Linux is used on today includes:

 Garmin Nuvi 860, 880, and 5000
 Google Android Dev Phone 1
 Dell Inspiron Mini 9 and 12
 Lenovo IdeaPad S9
 HP Mini 1000
 Motorola MotoRokr EM35 Phone
 One Laptop Per Child XO2
 Sony Bravia Television
 Sony Reader
 Volvo In-Car Navigation
 TiVo Digital Video Recorder
 Yamaha Motif Keyboard
These are just the most recent examples of Linux-based devices available to consumers
worldwide. The actual number of items that use Linux numbers in the thousands. The Linux
Foundation is building a centralized database that will list all currently offered Linux-based
products, as well as archive those devices that pioneered Linux-based electronics.

HISTORY OF LINUX CHRONOLOGY


 1983: Richard Stallman creates the GNU project with the goal of creating a free operating
system.
 1989: Richard Stallman writes the first version of the GNU General Public License.
 1991: The Linux kernel is publicly announced on 25 August by the 21 year old Finnish
student Linus Benedict Torvalds
 1992: The Linux kernel is relicensed under the GNU GPL. The first so called “Linux
distributions” are created.
 1993: Over 100 developers work on the Linux kernel. With their assistance the kernel is
adapted to the GNU environment, which creates a large spectrum of application types for
Linux. The oldest currently existing Linux distribution, Slackware, is released for the first
time. Later in the same year, the Debian project is established. Today it is the largest
community distribution.
 1994: In March Torvalds judges all components of the kernel to be fully matured: he releases
version 1.0 of Linux. The XFree86 project contributes a graphic user interface (GUI). In this
year the companiesRed Hat and SUSE publish version 1.0 of their Linux distributions.
 1995: Linux is ported to the DEC Alpha and to the Sun SPARC. Over the following years it
is ported to an ever greater number of platforms.
 1996: Version 2.0 of the Linux kernel is released. The kernel can now serve several
processors at the same time, and thereby becomes a serious alternative for many companies.
 1998: Many major companies such as IBM, Compaq and Oracle announce their support for
Linux. In addition, a group of programmers begins developing the graphical user interface KDE.
 1999: A group of developers begin work on the graphic environment GNOME, which should
become a free replacement for KDE, which depended on the then proprietary Qt toolkit.
During the year IBM announces an extensive project for the support of Linux.
 2004: The XFree86 team splits up and joins with the existing X Window standards body to
form the X.Org Foundation, which results in a substantially faster development of the X
Window Server for Linux.
 2005: The project openSUSE begins a free distribution from Novell's community. Also the
project OpenOffice.org introduces version 2.0 that now supports OASIS OpenDocument
standards in October.
 2006: Oracle releases its own distribution of Red Hat. Novell and Microsoft announce
cooperation for a better interoperability.
 2007: Dell starts distributing laptops with Ubuntu pre-installed in them.
 2011: Version 3.0 of the Linux kernel is released.
 2012: The aggregate Linux server market revenue exceeds that of the rest of the Unix
market.
 2013: Google's Linux-based Android claims 75% of the smartphone market share, in terms
of the number of phones shipped.
 2014: Ubuntu claims 22,000,000 users.
 2015: Version 4.0 of the Linux kernel is released.

GNU/LINUX OPERATING SYSTEM ARCHITECTURE

Linux as an operating system is referred to in some cases as "Linux" and in others as
"GNU/Linux." The reason behind this is that Linux is the kernel of an operating system. The
wide range of applications that make the operating system useful are the GNU software. For
example, the windowing system, compiler, variety of shells, development tools, editors, utilities,
and other applications exist outside of the kernel, many of which are GNU software. For this
reason, many consider "GNU/Linux" a more appropriate name for the operating system, while
"Linux" is appropriate when referring to just the kernel.

The fundamental architecture of the GNU/Linux operating system

At the top is the user, or application, space. This is where the user applications are executed.
Below the user space is the kernel space. Here, the Linux kernel exists.
There is also the GNU C Library (glibc). This provides the system call interface that connects
to the kernel and provides the mechanism to transition between the user-space application and
the kernel. This is important because the kernel and user application occupy different protected
address spaces. And while each user-space process occupies its own virtual address space, the
kernel occupies a single address space.

The Linux kernel can be further divided into three gross levels. At the top is the system call
interface, which implements the basic functions such as read and write. Below the system call
interface is the kernel code, which can be more accurately defined as the architecture-
independent kernel code. This code is common to all of the processor architectures supported by
Linux. Below this is the architecture-dependent code, which forms what is more commonly
called a BSP (Board Support Package). This code serves as the processor and platform-specific
code for the given architecture.

In reality, the architecture is not as clean as the layering just described. For example, the
mechanism by which system calls are handled (transitioning from the user space to the kernel
space) can differ by architecture. Newer x86 central processing units (CPUs) that provide
support for virtualization instructions are more efficient in this process than older x86 processors
that use the traditional int 80h method.
The Linux kernel implements a number of important architectural attributes. At a high level, and
at lower levels, the kernel is layered into a number of distinct subsystems. Linux can also be
considered monolithic because it lumps all of the basic services into the kernel. This differs from
a microkernel architecture where the kernel provides basic services such as communication, I/O,
and memory and process management, and more specific services are plugged in to the
microkernel layer. Each has its own advantages, but I'll steer clear of that debate.

Over time, the Linux kernel has become efficient in terms of both memory and CPU usage, as
well as extremely stable. But the most interesting aspect of Linux, given its size and complexity,
is its portability. Linux can be compiled to run on a huge number of processors and platforms
with different architectural constraints and needs. One example is the ability for Linux to run on
a processor with a memory management unit (MMU), as well as on those that provide no MMU. The
uClinux port of the Linux kernel provides for non-MMU support. See the Resources section for
more details.

MAJOR SUBSYSTEMS OF THE LINUX KERNEL


Now let's look at some of the major components of the Linux kernel using the breakdown shown
in Figure 3 as a guide.

Figure 3. One architectural perspective of the Linux kernel

System call interface Subsystem
The SCI is a thin layer that provides the means to perform function calls from user space into the
kernel. As discussed previously, this interface can be architecture dependent, even within the
same processor family. The SCI is actually an interesting function-call multiplexing and
demultiplexing service. You can find the SCI implementation in ./linux/kernel, as well as
architecture-dependent portions in ./linux/arch.
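
As a hedged, user-space illustration of the SCI, the short C program below asks for the calling process ID twice: once through the ordinary glibc wrapper getpid() and once through the generic syscall() entry point with the SYS_getpid number. It shows only the user-space side of the interface; the kernel-side dispatch lives in the sources noted above.

#include <stdio.h>
#include <unistd.h>       /* getpid, syscall */
#include <sys/syscall.h>  /* SYS_getpid */

int main(void)
{
    /* The usual route: glibc wraps the system call for us. */
    pid_t via_wrapper = getpid();

    /* The explicit route: request the same service from the SCI by number. */
    long via_syscall = syscall(SYS_getpid);

    printf("getpid() wrapper: %ld, raw syscall: %ld\n",
           (long)via_wrapper, via_syscall);
    return 0;
}

Both calls cross from user space into kernel space through the same system call interface; the wrapper simply hides the mechanics.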

Process management Subsystem


A kernel is fundamentally a resource manager: whether the resource being managed is a process,
memory, or a hardware device, the kernel manages and arbitrates access to that resource between
the multiple competing users (both in the kernel and in user space).
Process management is focused on the execution of processes. In the kernel, these are called
threads and represent an individual virtualization of the processor (thread code, data, stack, and
CPU registers). In user
space, the term process is typically used, though the Linux implementation does not separate the
two concepts (processes and threads). The kernel provides an application program interface
(API) through the SCI to create a new process (fork, exec, or Portable Operating System
Interface [POSIX] functions), stop a process (kill, exit), and communicate and synchronize
between them (signal, or POSIX mechanisms).
Process management also shares the CPU among the active threads. The kernel implements a
novel scheduling algorithm that operates in constant time, regardless of the number of threads
vying for the CPU. This is called the O(1) scheduler, denoting that the same amount of time is
taken to schedule one thread as it is to schedule many. The O(1) scheduler also supports multiple
processors (called Symmetric MultiProcessing, or SMP). You can find the process management
sources in ./linux/kernel and architecture-dependent sources in ./linux/arch.
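
A minimal, hedged sketch of that API from user space is shown below: the parent creates a child with fork, the child replaces itself with another program via exec, and the parent synchronizes with waitpid. The program run by the child (/bin/echo) is just an illustrative choice.

#include <stdio.h>
#include <unistd.h>     /* fork, execl */
#include <sys/wait.h>   /* waitpid, WEXITSTATUS */

int main(void)
{
    pid_t child = fork();                  /* create a new process */
    if (child < 0) {
        perror("fork");
        return 1;
    }

    if (child == 0) {
        /* Child: replace this process image with another program. */
        execl("/bin/echo", "echo", "hello from the child", (char *)NULL);
        perror("execl");                   /* reached only if exec fails */
        _exit(1);
    }

    /* Parent: wait for the child to terminate, then report its status. */
    int status = 0;
    waitpid(child, &status, 0);
    printf("child %ld exited with status %d\n",
           (long)child, WEXITSTATUS(status));
    return 0;
}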

Memory management Subsystem


Another important resource that's managed by the kernel is memory. For efficiency, given the
way that the hardware manages virtual memory, memory is managed in what are
called pages (4KB in size for most architectures). Linux includes the means to manage the
available memory, as well as the hardware mechanisms for physical and virtual mappings.
But memory management is much more than managing 4KB buffers. Linux provides abstraction
over 4KB buffers, such as the slab allocator. This memory management scheme uses 4KB
buffers as its base, but then allocates structures from within, keeping track of which pages are
full, partially used, and empty. This allows the scheme to dynamically grow and shrink based on
the needs of the greater system.
With multiple users of memory, there are times when the available memory can be
exhausted. For this reason, pages can be moved out of memory and onto the disk. This process is
called swapping because the pages are swapped from memory onto the hard disk. You can find
the memory management sources in ./linux/mm.
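
A small taste of this page-based management is visible from user space. The hedged C sketch below asks the kernel for its page size and maps a single anonymous page; the kernel's page, slab, and swapping machinery does the real work behind these two calls.

#include <stdio.h>
#include <string.h>
#include <unistd.h>    /* sysconf */
#include <sys/mman.h>  /* mmap, munmap */

int main(void)
{
    /* Ask the kernel how large a page is (4KB on most architectures). */
    long page_size = sysconf(_SC_PAGESIZE);

    /* Map one anonymous, zero-filled page of memory. */
    void *page = mmap(NULL, (size_t)page_size,
                      PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    printf("page size: %ld bytes, page mapped at %p\n", page_size, page);

    /* Touch the page, then hand it back to the kernel. */
    memset(page, 0, (size_t)page_size);
    munmap(page, (size_t)page_size);
    return 0;
}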

Virtual file system Subsystem


The virtual file system (VFS) is an interesting aspect of the Linux kernel because it provides a
common interface abstraction for file systems. The VFS provides a switching layer between the
SCI and the file systems supported by the kernel (see Figure 4).

Figure 4. The VFS provides a switching fabric between users and file systems

At the top of the VFS is a common API abstraction of functions such as open, close, read, and
write. At the bottom of the VFS are the file system abstractions that define how the upper-layer
functions are implemented. These are plug-ins for the given file system (of which over 50 exist).
You can find the file system sources in ./linux/fs.
Below the file system layer is the buffer cache, which provides a common set of functions to the
file system layer (independent of any particular file system). This caching layer optimizes access
to the physical devices by keeping data around for a short time (or speculatively read ahead so
that the data is available when needed). Below the buffer cache are the device drivers, which
implement the interface for the particular physical device.
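
To illustrate the common API at the top of the VFS, the hedged C sketch below opens a file, reads from it, and closes it; the same calls work whether the file lives on ext4, FAT, NTFS, or any other file system the kernel supports. The path /etc/hostname is only an example.

#include <stdio.h>
#include <fcntl.h>    /* open */
#include <unistd.h>   /* read, close */

int main(void)
{
    /* Open a file through the VFS; the path is illustrative. */
    int fd = open("/etc/hostname", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Read through the same interface regardless of the file system below. */
    char buf[256];
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("read %zd bytes: %s", n, buf);
    }

    close(fd);
    return 0;
}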

Network stack Subsystem


The network stack, by design, follows a layered architecture modeled after the protocols
themselves. Recall that the Internet Protocol (IP) is the core network layer protocol that sits
below the transport protocol (most commonly the Transmission Control Protocol, or TCP).
Above TCP is the sockets layer, which is invoked through the SCI.
The sockets layer is the standard API to the networking subsystem and provides a user interface
to a variety of networking protocols. From raw frame access to IP protocol data units (PDUs)
and up to TCP and the User Datagram Protocol (UDP), the sockets layer provides a standardized
way to manage connections and move data between endpoints. You can find the networking
sources in the kernel at ./linux/net.
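
As a hedged sketch of the sockets layer in use, the C program below creates a TCP socket and attempts to connect to 127.0.0.1 on port 80; the address and port are illustrative only, and the connect will simply fail politely if nothing is listening there.

#include <stdio.h>
#include <string.h>
#include <unistd.h>      /* close */
#include <arpa/inet.h>   /* inet_pton, htons */
#include <sys/socket.h>  /* socket, connect */

int main(void)
{
    /* Create a TCP socket backed by the kernel's network stack. */
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) {
        perror("socket");
        return 1;
    }

    /* Illustrative endpoint: 127.0.0.1:80. */
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(80);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(sock, (struct sockaddr *)&addr, sizeof(addr)) == 0)
        printf("connected through the kernel's TCP/IP stack\n");
    else
        perror("connect");   /* expected if no local server is running */

    close(sock);
    return 0;
}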

Device drivers
The vast majority of the source code in the Linux kernel exists in device drivers that make a
particular hardware device usable. The Linux source tree provides a drivers subdirectory that is
further divided by the various devices that are supported, such as Bluetooth, I2C, serial, and so
on. You can find the device driver sources in ./linux/drivers.

Architecture-dependent code
While much of Linux is independent of the architecture on which it runs, there are elements that
must consider the architecture for normal operation and for efficiency. The ./linux/arch

subdirectory defines the architecture-dependent portion of the kernel source contained in a
number of subdirectories that are specific to the architecture (collectively forming the BSP). For
a typical desktop, the i386 directory is used. Each architecture subdirectory contains a number of
other subdirectories that focus on a particular aspect of the kernel, such as boot, kernel, memory
management, and others. You can find the architecture-dependent code in ./linux/arch.

Interesting features of the Linux kernel


 Open source
 Portable
 Efficient
 Supports a large number of networking protocols, including the typical TCP/IP
 Dynamic: software components (kernel modules) can be added and removed on the fly
 Can serve as an operating system for other operating systems (a hypervisor)
 Provides the Kernel-based Virtual Machine (KVM)

If the portability and efficiency of the Linux kernel weren't enough, it provides some other
features that could not be classified in the previous decomposition.
Linux, being a production operating system and open source, is a great test bed for new protocols
and advancements of those protocols. Linux supports a large number of networking protocols,
including the typical TCP/IP, as well as extensions for high-speed networking (greater than 1
Gigabit Ethernet [GbE] and 10 GbE). Linux also supports protocols such as the Stream Control
Transmission Protocol (SCTP), which provides many advanced features above TCP (as a
replacement transport level protocol).
Linux is also a dynamic kernel, supporting the addition and removal of software components on
the fly. These are called dynamically loadable kernel modules, and they can be inserted at boot
when they're needed (when a particular device is found requiring the module) or at any time by
the user.
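
A minimal, hedged sketch of such a module is shown below; it only logs a message when it is loaded and unloaded. Building it requires the kernel headers and a small kbuild Makefile, which are not shown here.

#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal example of a dynamically loadable kernel module");

static int __init hello_init(void)
{
    pr_info("hello module loaded\n");    /* runs when the module is inserted */
    return 0;
}

static void __exit hello_exit(void)
{
    pr_info("hello module unloaded\n");  /* runs when the module is removed */
}

module_init(hello_init);
module_exit(hello_exit);

Once built, a module like this can be inserted with insmod and removed with rmmod at any time, without rebooting the system.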
A recent advancement of Linux is its use as an operating system for other operating systems
(called a hypervisor). Recently, a modification to the kernel was made called the Kernel-based
Virtual Machine (KVM). This modification enabled a new interface to user space that allows
other operating systems to run above the KVM-enabled kernel. In addition to running another
instance of Linux, Microsoft® Windows® can also be virtualized. The only constraint is that the
underlying processor must support the new virtualization instructions.
