
1. INTRODUCTION

In computing, a process is an instance of a computer program, consisting of one or more threads, that is being executed by a computer system with the ability to run several computer programs concurrently. Process migration, on the other hand, is the act of transferring a process between two machines. It enables dynamic load distribution, fault resilience, eased system administration, and data access locality. A thread is a single sequential stream of execution within a process; because threads have some of the properties of processes, they are sometimes called lightweight processes. In the course of this term paper, we shall study in detail what each of the above definitions means and how they inter-relate.
We shall also study the management of processes, their representation, and inter-process communication.
While we examine process migration, we shall take a look at the goals of process migration and the process migration algorithm. Under threading we will look at the advantages and uses of threading, and the types of and reasons for threading. Since a thread is a single sequential stream within a process and has some similarities with processes, we shall compare threads with processes: examine their differences, scrutinize their similarities, and stress the advantages and disadvantages of threads over multiple processes.
In wrapping up this term paper we shall enumerate the implementations of threads and draw our conclusion based on our research.

2.0 PROCESS

In computing, a process is an instance of a computer program, consisting of one or more threads, that
is being sequentially executed by a computer system that has the ability to run several computer
programs concurrently. A computer program itself is just a passive collection of instructions, while a
process is the actual execution of those instructions. Several processes may be associated with the same
program; for example, opening up several instances of the same program often means more than one
process is being executed. In the computing world, processes are formally defined by the operating
system (OS) running them and so may differ in detail from one OS to another. A single computer
processor executes one or more (multiple) instructions at a time (per clock cycle), one after the other
(this is a simplification; for the full story, see superscalar CPU architecture). To allow users to run
several programs at once (e.g., so that processor time is not wasted waiting for input from a resource),
single-processor computer systems can perform time-sharing. Time-sharing allows processes to switch
between being executed and waiting (to continue) to be executed. In most cases this is done very rapidly,
providing the illusion that several processes are executing 'at once'. This is known as concurrency or
multiprogramming.

2.0.1 HISTORY OF PROCESS


By the early 60s computer control software had evolved from Monitor control software, e.g., IBSYS, to
Executive control software. Computers got "faster" and computer time was still neither "cheap" nor fully
used. It made multiprogramming possible and necessary.
Multiprogramming means that several programs run "at the same time" (concurrently). At first they ran
on a single processor (i.e., uniprocessor) and shared scarce resources. Multiprogramming is also a basic
form of multiprocessing, a much broader term.
Programs consist of sequences of instructions for the processor. A single processor can run only one instruction
at a time, so it is impossible to run more than one program truly simultaneously. A program might need some
resource (input, for example) that has a large delay, or might start some slow operation (such as output to a
printer). All of this leads to the processor being "idle" (unused). To use the processor at all times, the execution of
such a program was halted; at that point, a second (or nth) program was started or restarted. Users
perceived that programs ran "at the same time" (hence the term, concurrent).
Shortly thereafter, the notion of a 'program' was expanded to the notion of an 'executing program and its
context'. The concept of a process was born.
This became necessary with the invention of re-entrant code.
Threads came somewhat later. However, with the advent of time-sharing; computer networks;
multiple-CPU, shared memory computers; etc., the old "multiprogramming" gave way to true
multitasking, multiprocessing and, later, multithreading.

2.1 Process representation


In general, a computer system process consists of (or is said to 'own') the following resources:
An image of the executable machine code associated with a program.
Memory (typically some region of virtual memory); which includes the executable code, process-specific
data (input and output), a call stack (to keep track of active subroutines and/or other events), and a heap to
hold intermediate computation data generated during run time.
Operating system descriptors of resources that are allocated to the process, such as file descriptors (Unix
terminology) or handles (Windows), and data sources and sinks.
Security attributes, such as the process owner and the process' set of permissions (allowable operations).
Processor state (context), such as the content of registers, physical memory addressing, etc. The state is
typically stored in processor registers when the process is executing, and in memory otherwise. The operating
system holds most of this information about active processes in data structures called process control blocks
(PCBs). Any subset of resources, but typically at least the processor state, may be associated with each of the
process' threads in operating systems that support threads or 'daughter' processes.
The operating system keeps its processes separated and allocates the resources they need so that they are
less likely to interfere with each other and cause system failures (e.g., deadlock or thrashing).

2.2 Process state


During the lifespan of a process, its execution status may be in one of five states (associated with
each state is usually a queue on which the process resides):
Executing: the process is currently running and has control of a CPU
Waiting: the process is currently able to run, but must wait until a CPU becomes available
Blocked: the process is currently waiting on I/O, either for input to arrive or output to be sent
Suspended: the process is currently able to run, but for some reason the OS has not placed the
process on the ready queue
Ready: the process is in memory, will execute given CPU time.
The diagram below shows a process going through various states.
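As a rough sketch of these states in code, the following models them as an enumeration with a transition table; the set of legal transitions here is an assumption inferred from the descriptions above, not taken from any particular operating system:

```python
from enum import Enum, auto

class State(Enum):
    EXECUTING = auto()  # currently running and holding a CPU
    WAITING = auto()    # runnable, but waiting for a CPU to become available
    BLOCKED = auto()    # waiting on I/O to arrive or be sent
    SUSPENDED = auto()  # runnable, but kept off the ready queue by the OS
    READY = auto()      # in memory, will execute when given CPU time

# Hypothetical legal transitions, for illustration only.
TRANSITIONS = {
    State.READY:     {State.EXECUTING, State.SUSPENDED},
    State.EXECUTING: {State.BLOCKED, State.WAITING, State.READY},
    State.WAITING:   {State.EXECUTING},
    State.BLOCKED:   {State.READY},
    State.SUSPENDED: {State.READY},
}

def move(current: State, nxt: State) -> State:
    """Advance a process to state `nxt`, rejecting illegal transitions."""
    if nxt not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.name} -> {nxt.name}")
    return nxt
```

For example, a ready process may be dispatched to the CPU, and an executing process may block on I/O, but a blocked process cannot jump straight back onto the CPU in this sketch.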

2.3 Process management

Multiprogramming systems explicitly allow multiple processes to exist at any given time, where
only one is using the CPU at any given moment, while the remaining processes are performing I/O or
are waiting.
The process manager is one of the four major parts of the operating system. It implements the process
abstraction by creating a model for the way the process uses the CPU and any system resources.
Much of the complexity of the operating system stems from the need for multiple processes to share the
hardware at the same time. As a consequence of this goal, the process manager implements CPU sharing
(called scheduling), process synchronization mechanisms, and a deadlock strategy. In addition,
the process manager implements part of the operating system's protection and security.

2.4 Inter-process communication

When processes communicate with each other, this is called "inter-process communication" (IPC).
Processes frequently need to communicate; for instance, in a shell pipeline the output of the first process
needs to be passed to the second one, and so on down the pipeline. It is preferable to do this in a well-structured way,
without using interrupts.
It is even possible for two communicating processes to be running on different machines. The operating system (OS)
may differ from one process to the other, so some mediators (called protocols) are needed.
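A pipeline-style exchange between two processes can be sketched as follows; this is a minimal Python illustration using a pipe as the IPC mechanism, and the function names are hypothetical:

```python
from multiprocessing import Pipe, Process

def producer(conn):
    # First process in the "pipeline": writes its output into the pipe.
    conn.send("hello from process 1")
    conn.close()

def run_pipeline():
    """Spawn a producer process and read its output in this process."""
    parent_conn, child_conn = Pipe()
    p = Process(target=producer, args=(child_conn,))
    p.start()
    message = parent_conn.recv()  # second process consumes the first one's output
    p.join()
    return message
```

In a real shell pipeline the kernel connects the standard output of one process to the standard input of the next; the explicit `Pipe` object here plays the role of that channel.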

2.5 Process Control Block

If the OS supports multiprogramming, then it needs to keep track of all the processes. For each process, its
process control block (PCB) is used to track the process's execution status, including the following:
The current contents of its processor registers
Its processor state (whether it is blocked or ready)
Its memory state
A pointer to its stack
Which resources have been allocated to it
Which resources it needs
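A PCB holding these fields might be sketched as a simple record; the field names below are illustrative assumptions, not the layout of any particular operating system:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PCB:
    """Hypothetical process control block with the fields listed above."""
    pid: int
    registers: Dict[str, int] = field(default_factory=dict)  # current register contents
    state: str = "ready"          # processor state: e.g. "ready" or "blocked"
    memory_state: str = ""        # description of the process's memory image
    stack_pointer: int = 0        # pointer to the process stack
    allocated: List[str] = field(default_factory=list)  # resources held
    needed: List[str] = field(default_factory=list)     # resources requested
```

The OS would keep one such record per process and update it on every scheduling decision, block, or resource allocation.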

3.0 Process Migration


Process migration is the act of transferring a process between two machines. It enables dynamic
load distribution, fault resilience, eased system administration, and data access locality. Despite these
goals and ongoing research efforts, migration has not achieved widespread use. With the increasing
deployment of distributed systems in general, and distributed operating systems in particular, process
migration is again receiving more attention in both research and product development. As high-performance facilities shift from supercomputers to networks of workstations, and with the ever-increasing role of the World Wide Web, we expect migration to play a more important role and
eventually to be widely adopted.
Process migration has been used to perform specialized tasks, such as load sharing and
checkpoint/restarting long running applications. Implementation typically consists of modifications to
existing applications and the creation of specialized support systems, which limit the applicability of the
methodology. Off-the-shelf applications have not benefited from process migration technologies, mainly
due to the lack of an effective generalized methodology and facility. The benefits of process migration
include mobility, checkpointing, relocation, scheduling, and on-the-fly maintenance. This paper shows
how regular shrink-wrapped applications can be migrated. The approach to migration is to virtualize the
application by injecting functionality into running applications and operating systems. Using this
scheme, we separate the physical resource bindings of the application and replace them with virtual
bindings. This technique is referred to as virtualization. We have developed a virtualizing Operating
System (vOS), residing on top of Windows 2000 that injects stock applications with the virtualizing
software. It coordinates activities across multiple platforms providing new functionality to the existing
applications. The vOS makes it possible to build communities of systems that cooperate to run
applications and share resources non-intrusively while retaining application binary compatibility.

3.0.1 Process Checkpointing


Some SSI systems allow checkpointing of running processes, allowing their current state to be saved and
reloaded at a later date. Checkpointing can be seen as related to migration: migrating a
process from one node to another can be implemented by first checkpointing the process, then restarting
it on another node. Alternatively, checkpointing can be considered migration to disk.
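Checkpointing to disk can be sketched as serializing a process's state (here, a toy dictionary standing in for memory contents and registers) and reloading it later, possibly on another node. The `checkpoint` and `restart` names are hypothetical:

```python
import pickle

def checkpoint(state, path):
    """Save the process state to disk ("migration to disk")."""
    with open(path, "wb") as f:
        pickle.dump(state, f)

def restart(path):
    """Reload a previously checkpointed state, possibly on another node."""
    with open(path, "rb") as f:
        return pickle.load(f)
```

Real checkpointing systems must also capture open file descriptors, signal state, and other kernel context, which is far harder than serializing user-visible data.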

3.1 Goals of Process Migration

The goals of process migration are closely tied to the types of applications that use migration,
as described in the next section. The goals of process migration include:
Accessing more processing power is a goal of migration when it is used for load
distribution. Migration is particularly important in receiver-initiated distributed
scheduling algorithms, where a lightly loaded node announces its availability and initiates
process migration from an overloaded node. This was the goal of many systems described
in this survey, such as Locus [Walker et al., 1983], MOSIX [Barak and Shiloh, 1985], and
Mach [Milojicic et al., 1993a]. Load distribution also depends on load information
management and distributed scheduling (see Sections 2.7 and 2.8). A variation of this
goal is harnessing the computing power of temporarily free workstations in large clusters.
In this case, process migration is used to evict processes upon the owner's return, such as
in the case of Sprite.
Exploitation of resource locality is a goal of migration in cases when it is more efficient to
access resources locally than remotely. Moving a process to the other end of a
communication channel transforms remote communication into local communication and thereby
significantly improves performance. It is also possible that the resource is not remotely
accessible, as in the case when there are different semantics for local and remote
accesses. Examples include work by Jul [1989], Milojicic et al. [1993], and Miller and
Presotto [1981].
Resource sharing is enabled by migration to a specific node with a special hardware
device, large amounts of free memory, or some other unique resource. Examples include
NOW [Anderson et al., 1995] for utilizing the memory of remote nodes, and the use of
parallel make in Sprite [Douglis and Ousterhout, 1991] and work by Skordos [1995] for
utilizing unused workstations.
Fault resilience is improved by migration from a partially failed node, or, in the case of
long-running applications, when failures of different kinds (network, devices) are
probable [Chu et al., 1980]. In this context, migration can be used in combination with
checkpointing, such as in Condor [Litzkow and Solomon, 1992] or Utopia [Zhou et al.,
1994]. Large-scale systems, where there is a likelihood that some of the systems can fail,
can also benefit from migration, such as in Hive [Chapin et al., 1995] and OSF/1 AD TNC
[Zajcew et al., 1993].

System administration is simplified if long-running computations can be temporarily
transferred to other machines. For example, an application could migrate away from a node that
will be shut down, and then migrate back after the node is brought back up. Another
example is the repartitioning of large machines, such as in the OSF/1 AD TNC Paragon
configuration [Zajcew et al., 1993].
Mobile computing also increases the demand for migration. Users may want to migrate
running applications from a host to their mobile computer as they connect to a network at
their current location, or back again when they disconnect [Bharat and Cardelli, 1995].

3.2 Process Migration Algorithm


Although there are many different migration implementations and designs, most of them can be
summarized in the following steps:
1. A migration request is issued to a remote node. After negotiation, migration is accepted.
2. A process is detached from its source node by suspending its execution, declaring it to be in
a migrating state, and temporarily redirecting communication as described in the following step.
3. Communication is temporarily redirected by queuing up arriving messages directed to the
migrated process, and by delivering them to the process after migration. This step continues in parallel with steps 4,
5, and 6, as long as there are additional incoming messages. Once the communication channels
are enabled after migration (as a result of step 7), the migrated process is known to the external
world.
4. The process state is extracted, including memory contents; processor state (register
contents); communication state (e.g., opened files and message channels); and relevant kernel
context. The communication state and kernel context are OS-dependent. Some of the local OS
internal state is not transferable. The process state is typically retained on the source node until
the end of migration, and in some systems it remains there even after migration
completes. Processor dependencies, such as register and stack contents, have to be eliminated in
the case of heterogeneous migration.
5. A destination process instance is created into which the transferred state will be imported. A
destination instance is not activated until a sufficient amount of state has been transferred from
the source process instance. After that, the destination instance will be promoted into a regular
process.

6. State is transferred and imported into a new instance on the remote node. Not all of the
state needs to be transferred; some of the state could be lazily brought over after migration is
completed.
7. Some means of forwarding references to the migrated process must be maintained. This is
required in order to communicate with the process or to control it. It can be achieved by
registering the current location at the home node (e.g. in Sprite), by searching for the migrated
process (e.g. in the V Kernel, at the communication protocol level), or by forwarding messages
across all visited nodes (e.g. in Charlotte). This step also enables migrated communication
channels at the destination and it ends step 3 as communication is permanently redirected.
8. The new instance is resumed when sufficient state has been transferred and imported. With
this step, process migration completes. Once all of the state has been transferred from the
original instance, it may be deleted on the source node.
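The steps above can be sketched as a toy, single-machine simulation that moves a process record between two "nodes". Everything here is illustrative; a real system must transfer state across machines and handle the communication redirection concurrently rather than in one function call:

```python
class Node:
    """A toy node holding a table of process records keyed by PID."""
    def __init__(self, name):
        self.name = name
        self.processes = {}

def migrate(pid, source, destination):
    proc = source.processes[pid]
    proc["state"] = "migrating"             # step 2: detach and suspend
    pending = []                            # step 3: messages queued during migration
    extracted = dict(proc)                  # step 4: extract the process state
    destination.processes[pid] = extracted  # steps 5-6: create instance, import state
    # step 7: leave a forwarding reference at the source (home-node style)
    source.processes[pid] = {"forward_to": destination.name}
    extracted["state"] = "running"          # step 8: resume the new instance
    extracted.setdefault("mailbox", []).extend(pending)  # deliver queued messages
    return extracted
```

Note how the source node keeps only a forwarding stub after migration, matching the home-node approach used by Sprite.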

4.0 Threading
A thread of execution results from a fork of a computer program into two or more concurrently
running tasks. The implementation of threads and processes differs from one operating system to
another, but in most cases, a thread is contained inside a process. Multiple threads can exist within the
same process and share resources such as memory, while different processes do not share these
resources.

On a single processor, multithreading generally occurs by time-division multiplexing (as in
multitasking): the processor switches between different threads. This context switching generally
happens frequently enough that the user perceives the threads or tasks as running at the same time. On a
multiprocessor or multi-core system, the threads or tasks will generally run at the same time, with each
processor or core running a particular thread or task. Support for threads in programming languages
varies. A number of languages support multiple threads but do not allow them to execute at the same
time. Examples of such languages include Python and OCaml, because the parallel support of their
runtime environment is based on a central lock, called the "Global Interpreter Lock" in Python and the
"master lock" in OCaml. Other languages may be limited because they use threads that are user threads,
which are not visible to the kernel, and thus cannot be scheduled to run concurrently. On the other hand,
kernel threads, which are visible to the kernel, can run concurrently.
Many modern operating systems directly support both time-sliced and multiprocessor threading with a
process scheduler. The kernel of an operating system allows programmers to manipulate threads via the
system call interface. Such threads are called kernel threads; a lightweight process
(LWP) is a specific type of kernel thread that shares the same state and information. Programs can also have
user-space threads, using timers, signals, or other methods to interrupt their own
execution, performing a sort of ad hoc time-slicing.
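A minimal example of manipulating threads through a system-call-backed library (Python's `threading` module in this sketch): several threads are created inside one process, run concurrently, and append to a list they all share.

```python
import threading

results = []                      # shared by every thread in the process
results_lock = threading.Lock()   # serialize appends from concurrent threads

def worker(name):
    # Each thread runs within the same process and sees `results` directly.
    with results_lock:
        results.append(name)

def run_threads(n):
    """Create, start, and join n threads; return the collected names."""
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sorted(results)
```

On CPython the Global Interpreter Lock mentioned above means these threads interleave rather than execute Python bytecode in parallel, which is exactly the limitation the surrounding text describes.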

4.1 Multithreading

Multithreading as a widespread programming and execution model allows multiple threads to
exist within the context of a single process. These threads share the process' resources but are
able to execute independently. The threaded programming model provides developers with a
useful abstraction of concurrent execution. However, perhaps the most interesting application of
the technology is when it is applied to a single process to enable parallel execution on a
multiprocessor system.
This advantage of a multithreaded program allows it to operate faster on computer systems that
have multiple CPUs, CPUs with multiple cores or across a cluster of machines because the
threads of the program naturally lend themselves to truly concurrent execution. In such a case,
the programmer needs to be careful to avoid race conditions, and other non-intuitive behaviors.
In order for data to be correctly manipulated, threads will often need to rendezvous in time in
order to process the data in the correct order. Threads may also require atomic operations (often
implemented using semaphores) in order to prevent common data from being simultaneously
modified, or read while in the process of being modified. Careless use of such primitives can
lead to deadlocks.
Another advantage of multithreading, even for single-CPU systems, is the ability for an
application to remain responsive to input. In a single-threaded program, if the main execution
thread blocks on a long-running task, the entire application can appear to freeze. By moving such
long running tasks to a worker thread that runs concurrently with the main execution thread, it is
possible for the application to remain responsive to user input while executing tasks in the
background.
Operating systems schedule threads in one of two ways:
Preemptive multithreading is generally considered the superior approach, as it allows the
operating system to determine when a context switch should occur. The disadvantage to
preemptive multithreading is that the system may make a context switch at an
inappropriate time, causing priority inversion or other negative effects which may be
avoided by cooperative multithreading.
Cooperative multithreading, on the other hand, relies on the threads themselves to
relinquish control once they are at a stopping point. This can create problems if a thread
is waiting for a resource to become available.

4.1.0 Advantages & Disadvantages of Multithreading


Advantages
If a thread gets a lot of cache misses, the other thread(s) can continue, taking advantage of
the unused computing resources, which can lead to faster overall execution, as these
resources would have been idle if only a single thread were executed.
If a thread cannot use all the computing resources of the CPU (because instructions depend
on each other's results), running another thread keeps these resources from remaining idle.
If several threads work on the same set of data, they can actually share their cache, leading
to better cache usage and synchronization on its values.
Disadvantages
Multiple threads can interfere with each other when sharing hardware resources such as
caches or translation lookaside buffers (TLBs).
Execution times of a single-thread are not improved but can be degraded, even when only
one thread is executing. This is due to slower frequencies and/or additional pipeline
stages that are necessary to accommodate thread-switching hardware.
Hardware support for multithreading is more visible to software, thus requiring more
changes to both application programs and operating systems than multiprocessing.

4.1.1 Types of Multithreading


Kernel thread: this is the "lightest" unit of kernel scheduling. At least one kernel thread
exists within each process. If multiple kernel threads can exist within a process, then they share
the same memory and file resources. Kernel threads are preemptively multitasked if the
operating system's process scheduler is preemptive. Kernel threads do not own resources except
for a stack, a copy of the registers including the program counter, and thread-local storage (if
any). In this paper, the term "thread" (without a kernel or user qualifier) defaults to referring to
kernel threads.
Advantages of Kernel Threads
Because the kernel has full knowledge of all threads, the scheduler may decide to give more
time to a process having a large number of threads than to a process having a small
number of threads.
Kernel-level threads are especially good for applications that frequently block.
Disadvantages of Kernel Threads
Kernel-level threads are slow and inefficient. For instance, kernel thread operations are
hundreds of times slower than user-level thread operations.
Since the kernel must manage and schedule threads as well as processes, it requires a full
thread control block (TCB) for each thread to maintain information about it. As a result
there is significant overhead and increased kernel complexity.

User thread: threads are sometimes implemented in userspace libraries, and are thus called user
threads. The kernel is not aware of them; they are managed and scheduled in userspace. Some
implementations base their user threads on top of several kernel threads, to benefit from
multiprocessor machines (the N:M model). User threads as implemented by virtual
machines are also called green threads. User threads are generally fast to create and manage.
Advantages of User Threads
The most obvious advantage of this technique is that a user-level threads package can be
implemented on an operating system that does not support threads; user-level threads do
not require modifications to the operating system.
Simple representation: each thread is represented simply by a PC, registers, a stack and a
small control block, all stored in the user process address space.
Simple management: creating a thread, switching between threads and synchronizing
between threads can all be done without intervention of the kernel.
Fast and efficient: thread switching is not much more expensive than a procedure call.
Disadvantages of User Threads
There is a lack of coordination between threads and the operating system kernel. Therefore,
the process as a whole gets one time slice irrespective of whether it has one thread or
1000 threads within it. It is up to each thread to relinquish control to the other threads.
User-level threads require non-blocking system calls, i.e., a multithreaded kernel.
Otherwise, the entire process will block in the kernel, even if there are runnable threads
left in the process. For example, if one thread causes a page fault, the whole process blocks.

Fibers: these are an even lighter unit of scheduling, and are cooperatively scheduled: a running fiber
must explicitly "yield" to allow another fiber to run, which makes their implementation much
easier than that of kernel or user threads. A fiber can be scheduled to run in any thread in the same
process. This permits applications to gain performance improvements by managing scheduling
themselves, instead of relying on the kernel scheduler (which may not be tuned for the
application). Parallel programming environments such as OpenMP typically implement their
tasks through fibers.
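Cooperative scheduling of fibers can be sketched with Python generators: each "fiber" runs until it explicitly yields, and a tiny round-robin scheduler decides which fiber resumes next. This is an illustrative sketch, not how OpenMP or any production runtime actually implements tasks:

```python
from collections import deque

def fiber(name, steps, trace):
    """A toy fiber: records its progress, then yields control explicitly."""
    for i in range(steps):
        trace.append((name, i))
        yield  # explicit yield point: hand control back to the scheduler

def run(fibers):
    """Round-robin cooperative scheduler over a queue of fibers."""
    queue = deque(fibers)
    while queue:
        f = queue.popleft()
        try:
            next(f)          # resume the fiber until its next yield
            queue.append(f)  # still alive: reschedule it at the back
        except StopIteration:
            pass             # fiber finished; drop it
```

Because a fiber only loses control at its own `yield`, a fiber that never yields would starve every other fiber, which is exactly the cooperative-scheduling hazard noted above.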

4.2 Concurrency & Data structures

Threads in the same process share the same address space. This allows concurrently-running
code to couple tightly and conveniently exchange data without the overhead or complexity of an
IPC. When shared between threads, however, even simple data structures become prone to race
hazards if they require more than one CPU instruction to update: two threads may end up
attempting to update the data structure at the same time and find it unexpectedly changing
underfoot. Bugs caused by race hazards can be very difficult to reproduce and isolate.
To prevent this, threading APIs offer synchronization primitives such as mutexes to lock data
structures against concurrent access. On uniprocessor systems, a thread running into a locked
mutex must sleep and hence trigger a context switch. On multi-processor systems, the thread
may instead poll the mutex in a spinlock. Both of these may sap performance and force
processors in SMP systems to contend for the memory bus, especially if the granularity of the
locking is fine.
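A minimal illustration of guarding a shared data structure with a mutex: several threads perform a read-modify-write on one counter, and holding the lock serializes the update so no increments are lost. Without the lock the sequence is not atomic in general (whether lost updates actually occur depends on the interpreter and platform, so only the locked case is demonstrated here):

```python
import threading

def shared_increment(n, use_lock=True):
    """Increment a shared counter n times from each of 4 threads."""
    counter = {"value": 0}
    lock = threading.Lock()

    def work():
        for _ in range(n):
            if use_lock:
                with lock:                 # serialize the read-modify-write
                    counter["value"] += 1
            else:
                counter["value"] += 1      # racy in general: not atomic

    threads = [threading.Thread(target=work) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter["value"]
```

With the lock held, the result is always exactly 4 × n, whatever interleaving the scheduler chooses.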

4.3 I/O & Scheduling

User thread or fiber implementations are typically entirely in userspace. As a result, context
switching between user threads or fibers within the same process is extremely efficient because it
does not require any interaction with the kernel at all: a context switch can be performed by
locally saving the CPU registers used by the currently executing user thread or fiber and then
loading the registers required by the user thread or fiber to be executed. Since scheduling occurs
in userspace, the scheduling policy can be more easily tailored to the requirements of the
program's workload.
However, the use of blocking system calls in user threads or fibers can be problematic. If a user
thread or a fiber performs a system call that blocks, the other user threads and fibers in the
process are unable to run until the system call returns. A typical example of this problem is when
performing I/O: most programs are written to perform I/O synchronously. When an I/O operation
is initiated, a system call is made, and does not return until the I/O operation has been completed.

In the intervening period, the entire process is "blocked" by the kernel and cannot run, which
starves other user threads and fibers in the same process from executing.
A common solution to this problem is providing an I/O API that implements a synchronous
interface by using non-blocking I/O internally, and scheduling another user thread or fiber while
the I/O operation is in progress. Similar solutions can be provided for other blocking system
calls. Alternatively, the program can be written to avoid the use of synchronous I/O or other
blocking system calls.

4.4 Reasons For Threads

Following are some reasons why we use threads in designing operating systems.
A process with multiple threads makes a great server, for example a printer server.
Because threads can share common data, they do not need to use inter-process
communication.
Because of their very nature, threads can take advantage of multiprocessors.
Threads are cheap in the sense that:
They only need a stack and storage for registers; therefore, threads are cheap to create.
Threads use very few resources of the operating system in which they are working. That is,
threads do not need a new address space, global data, program code or operating system
resources.
Context switching is fast when working with threads, because we only have to
save and/or restore the PC, SP and registers.
But this cheapness does not come free; the biggest drawback is that there is no protection
between threads.

5.0 Threads Compared With Processes

Because a thread is a single sequential stream within a process and has some similarities with
processes, we shall here examine the similarities and also see how they differ.

5.1 Similarities between Threads & processes


Like processes, threads share the CPU, and only one thread is active (running) at a time.
Like processes, threads within a process execute sequentially.
Like processes, threads can create children.
And like processes, if one thread is blocked, another thread can run.

5.2 Differences between Threads & processes


Unlike processes, threads are not independent of one another.
Unlike processes, all threads can access every address in the task.
Unlike processes, threads are designed to assist one another. Note that processes might or might
not assist one another, because processes may originate from different users.
Processes carry considerable state information, whereas multiple threads within a process
share state as well as memory and other resources.
Processes have separate address spaces, whereas threads share their address space.
Processes interact only through system-provided inter-process communication mechanisms.
Context switching between threads in the same process is typically faster than context
switching between processes.
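The address-space difference can be demonstrated directly: a thread can update an ordinary global variable, while a child process cannot touch the parent's globals and must use a system-provided IPC object (a shared `Value` in this sketch):

```python
import threading
from multiprocessing import Process, Value

counter = 0  # ordinary global: lives in this process's address space

def thread_work():
    global counter
    counter += 1  # a thread shares the address space, so it updates it directly

def process_work(shared):
    with shared.get_lock():
        shared.value += 1  # a child process needs an explicit IPC object

def demo():
    t = threading.Thread(target=thread_work)
    t.start(); t.join()

    shared = Value("i", 0)  # system-provided shared memory (IPC)
    p = Process(target=process_work, args=(shared,))
    p.start(); p.join()
    return counter, shared.value
```

Had `process_work` incremented `counter` instead, the parent's copy would remain unchanged: the child modifies only its own separate address space.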

5.3 Advantages of Threads over Multiple processes


Context switching: threads are very inexpensive to create and destroy, and they are
inexpensive to represent. For example, they require space to store the PC, the SP, and the
general-purpose registers, but they do not require space to store memory information,
information about open files or I/O devices in use, etc. With so little context, it is much
faster to switch between threads; in other words, a context switch using threads is
relatively easy.

Sharing: threads allow the sharing of many resources that cannot be shared between processes,
for example, sharing the code section, the data section, and operating system resources like open
files.

5.4 Disadvantages of Threads over Multiple processes


Blocking: The major disadvantage is that if the kernel is single-threaded, a system call by
one thread will block the whole process, and the CPU may be idle during the blocking period.
Security: Since there is extensive sharing among threads, there is a potential security
problem. It is quite possible for one thread to overwrite the stack of another thread (or
damage shared data), although this is very unlikely since threads are meant to cooperate on
a single task.
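The shared-data hazard can be sketched as follows (illustrative code, not from the paper): two threads perform an unsynchronized read-modify-write on the same counter, so an update made between one thread's read and its write can be lost. The final count may therefore fall short of 200000, though the exact result depends on how the runtime interleaves the threads.

```python
import sys
import threading

sys.setswitchinterval(1e-6)     # switch threads aggressively to expose the race

n = 0

def unsafe_bump(times):
    global n
    for _ in range(times):
        tmp = n                 # read ...
        n = tmp + 1             # ... then modify-write: not atomic across threads

threads = [threading.Thread(target=unsafe_bump, args=(100000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(n)                        # may be less than 200000: lost updates
```

Guarding the read-modify-write with a lock (as in the earlier sharing example) restores the expected total, which is exactly the cooperation discipline the text says threads are meant to follow.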

6.0 Implementation of Threads


A major area of research is the thread scheduler, which must quickly choose among the list of
ready-to-run threads to execute next, as well as maintain the ready-to-run and stalled thread lists.
An important sub-topic is the different thread priority schemes that can be used by the scheduler.
The thread scheduler might be implemented entirely in software, entirely in hardware, or as a
hardware/software combination.
Another area of research is what type of events should cause a thread switch: cache misses,
inter-thread communication, DMA completion, etc.
If the multithreading scheme replicates all software-visible state, including privileged control
registers, TLBs, etc., then it enables a virtual machine to be created for each thread. This allows
each thread to run its own operating system on the same processor. On the other hand, if only
user-mode state is saved, less hardware is required, which allows more threads to be active at
one time for the same die area/cost.
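A ready-to-run list of the kind described above can be sketched in a few lines of illustrative code (not from the paper), with generators standing in for threads that yield control cooperatively and a round-robin policy in place of a priority scheme:

```python
from collections import deque

def scheduler(tasks):
    # Toy cooperative scheduler: maintain a ready-to-run queue and
    # repeatedly pick the thread at its head.
    ready = deque(tasks)
    trace = []
    while ready:
        task = ready.popleft()    # choose the next runnable thread
        try:
            trace.append(next(task))
            ready.append(task)    # still runnable: requeue at the back
        except StopIteration:
            pass                  # thread finished; drop it from the list
    return trace

def worker(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"       # yielding models a voluntary thread switch

print(scheduler([worker("A", 2), worker("B", 2)]))
# ['A:0', 'B:0', 'A:1', 'B:1']
```

A real scheduler would also keep a stalled list for blocked threads and decide switches on the events the text mentions (cache misses, DMA completion, and so on), rather than only on voluntary yields.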

6.1 Types of Threading and Implementation Examples

There are many different and incompatible implementations of threading, including both
kernel-level and user-level implementations. They often follow the POSIX Threads
interface more or less closely.
Kernel-level implementation examples
Light Weight Kernel Threads in various BSDs
M:N threading

Native POSIX Thread Library for Linux, an implementation of the POSIX Threads (pthreads)
standard
Apple Multiprocessing Services version 2.0 and later uses the built-in nanokernel in Mac OS
8.6 and later, which was modified to support it.
User-level implementation examples
GNU Portable Threads
FSU Pthreads
Apple Inc.'s Thread Manager
REALbasic (includes an API for cooperative threading)
Netscape Portable Runtime (includes a user-space fibers implementation)
Hybrid implementation examples
Scheduler activations used by the NetBSD native POSIX threads library implementation (an
N:M model as opposed to a 1:1 kernel or userspace implementation model)
Marcel from the PM2 project.
The OS for the Tera/Cray MTA
Microsoft Windows 7
Fiber implementation examples
Fibers can be implemented without operating system support, although some operating systems or
libraries provide explicit support for them.
Win32 supplies a fiber API (Windows NT 3.51 SP3 and later)
Ruby

REFERENCES

David R. Butenhof: Programming with POSIX Threads, Addison-Wesley, ISBN 0-201-63392-2

Bradford Nichols, Dick Buttlar, Jacqueline Proulx Farell: Pthreads Programming,
O'Reilly & Associates, ISBN 1-56592-115-1

Charles J. Northrup: Programming with UNIX Threads, John Wiley & Sons, ISBN 0-471-13751-0

Mark Walmsley: Multi-Threaded Programming in C++, Springer, ISBN 1-85233-146-1

Paul Hyde: Java Thread Programming, Sams, ISBN 0-672-31585-8

Bill Lewis: Threads Primer: A Guide to Multithreaded Programming, Prentice Hall, ISBN 0-13-443698-9

Steve Kleiman, Devang Shah, Bart Smaalders: Programming With Threads, SunSoft Press, ISBN 0-13-172389-8

Pat Villani: Advanced WIN32 Programming: Files, Threads, and Process Synchronization,
HarperCollins Publishers, ISBN 0-87930-563-0

Jim Beveridge, Robert Wiener: Multithreading Applications in Win32, Addison-Wesley, ISBN 0-201-44234-5

Pfister, Gregory F. (1998), In Search of Clusters, Upper Saddle River, NJ: Prentice Hall PTR,
ISBN 978-0138997090, OCLC 38300954

Buyya, Rajkumar; Cortes, Toni; Jin, Hai (2001), "Single System Image", International
Journal of High Performance Computing Applications 15 (2): 124, doi:10.1177/109434200101500205

Smith, Jonathan M. (1988), "A survey of process migration mechanisms", ACM SIGOPS
Operating Systems Review 22: 28, doi:10.1145/47671.47673

CONTENT

1.0   INTRODUCTION
2.0   PROCESS
2.0.1 History of Processes
2.1   Process Representation
2.2   Process State
2.3   Process Management
2.4   Inter-process Communication
2.5   Process Control Block
3.0   PROCESS MIGRATION
3.0.1 Process Checkpointing
3.1   Goal of Process Migration
3.2   Process Migration Algorithm
4.0   THREADING
4.1   Multithreading
4.1.0 Advantages and Disadvantages of Multithreading
4.1.1 Types of Multithreading
4.2   Concurrency and Data Structure
4.3   I/O and Scheduling
4.4   Reasons for Threads
5.0   THREADS COMPARED WITH PROCESSES
5.1   Similarities
5.2   Differences
5.3   Advantages of Threads over Multiple Processes
5.4   Disadvantages of Threads over Multiple Processes
6.0   IMPLEMENTATION

A TERM PAPER
ON
PROCESS, PROCESS MIGRATION AND THREADING IN OPERATING SYSTEM

COMPLETED BY

SUBMITTED TO
MRS. DARAMOLA

IN PARTIAL FULFILMENT OF THE AWARD OF B.TECH IN COMPUTER SCIENCE
IN THE FEDERAL UNIVERSITY OF TECHNOLOGY AKURE, ONDO STATE

November 2009