
Question 01 Briefly explain two benefits of threads in an operating system.

First let us examine what a process and a thread are.

Process: A collection of one or more threads and the associated system resources (such as memory containing both code and data, open files, and devices). This corresponds closely to the concept of a program in execution. By breaking a single application into multiple threads, the programmer gains great control over the modularity of the application and the timing of application-related events.

Thread: A dispatchable unit of work. It includes a processor context (which includes the program counter and stack pointer) and its own data area for a stack (to enable subroutine branching). A thread executes sequentially and is interruptible so that the processor can turn to another thread.

Below are two benefits of using threads.

1. It takes far less time to create a new thread in an existing process than to create a brand-new process. The reason for this is that when creating a new thread the operating system does not have to allocate new resources for it; the thread simply shares the resources already allocated to the process. This greatly reduces the load on the system. Similarly, terminating a thread and switching between threads are also faster than the corresponding operations on processes.

2. Threads enhance efficiency in communication between different executing programs. In most operating systems, communication between independent processes requires the intervention of the kernel to provide protection and the mechanisms needed for communication. However, because threads within the same process share memory and files, they can communicate with each other without invoking the kernel.

Question 02

Explain the following thread models:
a) Many-to-One Model
b) One-to-One Model
c) Many-to-Many Model

a) Many-to-One (1:N) - User-level threading

The 1:N model, with one kernel thread for N user threads, is a model that is commonly called "lightweight threads." Basically the library maps all threads to a single lightweight process (LWP). In this model a process manages thread creation, termination, cleanup, execution, and scheduling completely on its own, without the help or knowledge of the kernel. The major advantage of this model is that thread creation, termination, and context switches can be highly efficient: since the kernel is not involved in any thread life-cycle events or context switches within the same process, it is possible to create huge numbers of threads in user space.

This model has several disadvantages, though. One of the major downsides is that it cannot use the kernel's scheduler: the kernel scheduler does not know about the threads, so they cannot be scheduled across CPUs. As a result, all the user-space threads execute on the same CPU and cannot take advantage of true parallel execution. One way to cope with this is to create multiple processes (using fork()) and then have the processes communicate with each other.

Pros:
- Thread creation, execution, cleanup, termination, and synchronization are extremely simple and require few resources.
- Many threads can be created.

Cons:
- Cannot take advantage of parallel processing; the kernel scheduler does not know about the threads, so they cannot be scheduled across CPUs.
- Blocking I/O operations can block all the threads.

Two main problems arise as a result of the kernel's non-involvement in the threads:

1. Irrespective of the number of host CPUs, each process is scheduled onto only one CPU. All threads in a process contend for that CPU, sharing any time-slice allocation the kernel may use in its process scheduling.

2. If one thread issues a blocking operation, for example to read() from or write() to a file, all threads in that process block until the operation completes. Many 1:N implementations provide wrappers around OS system functions to reduce this restriction. These wrappers are not completely fool-proof, however, and may restrict or change the behavior of the application.

User-level implementation examples:
- GNU Portable Threads
- FSU Pthreads
- Apple Inc.'s Thread Manager
- REALbasic (includes an API for cooperative threading)
- Netscape Portable Runtime (includes a user-space fibers implementation)

b) One-to-One (1:1) - Kernel-level threading

The 1:1 model, with one kernel thread for each user thread, is a very widespread model that is seen in many operating systems. It is sometimes referred to as "native threads." Here the library maps each thread to a different lightweight process. In this model each user thread (execution state in user space) is paired with a kernel thread (execution state in kernel space); the two commonly interact via system calls and signals. Since the OS kernel is notified when a new thread is created in user space, corresponding state can be created in the kernel. As in the 1:N model, different threads can share the same virtual address space, but care must be taken to synchronize access to the same memory regions.

The 1:1 model fixes the two problems of the 1:N model discussed earlier:

1. Multithreaded applications can take advantage of multiple CPUs if they are available: the scheduler can schedule threads created in the 1:1 model across different CPUs to execute in parallel with each other, thus allowing the use of parallelism.

2. If the kernel blocks one thread in a system function, other threads can continue to make progress. A by-product of this is that if a thread executes a system call that blocks, the other threads in the process can be scheduled and executed in the intervening time.

Since state exists in the kernel, operations can be more costly than with the N:1 model, though generally still cheaper than process life-cycle operations.

Pros:
- Threads can execute on different CPUs, thus allowing the use of parallelism.
- Threads do not block each other.
- Shared memory.

Cons:
- Setup overhead: the creation of a thread involves the creation of a lightweight process, and each thread takes up kernel resources.
- Low limits on the number of threads which can be created.

Kernel-level implementation examples:
- Light Weight Kernel Threads (LWKT) in various BSDs
- Native POSIX Thread Library (NPTL) for Linux, an implementation of the POSIX Threads (pthreads) standard
- Apple Multiprocessing Services version 2.0 and later, which uses the built-in nanokernel in Mac OS 8.6 and later that was modified to support it
- Microsoft Windows from Windows 95 and Windows NT onwards

c) Many-to-Many (M:N) - Hybrid threading

The M:N model, with M kernel threads for N user threads, is a hybrid of the previous two models; it supports a mix of user threads and kernel threads. As in the N:1 model, the OS threading library schedules user-space threads onto so-called "lightweight processes" (LWPs). As in the 1:1 model, those LWPs themselves map one-to-one onto kernel threads, and the OS kernel schedules the kernel threads onto CPUs. When an application initiates a thread, the OS threading library creates a user-space thread, but only creates a kernel thread if needed or if the application explicitly requests the system to do so; the application can indicate in which scope the thread should operate. In this model the library has two kinds of threads, bound and unbound:
- Bound threads are each mapped to a single lightweight process.
- Unbound threads may be mapped to the same LWP.

Pros:
- Takes advantage of multiple CPUs.
- Not all threads are blocked by blocking system calls.
- Cheap creation, execution, and cleanup.

Cons:
- Needs the scheduler in user space and the kernel to work with each other.
- User threads doing blocking I/O operations will block all other user threads sharing the same kernel thread.
- Difficult to write, maintain, and debug code.

This hybrid model appears to be a best-of-both-worlds solution that includes the advantages of 1:1 and 1:N threading without the downsides. However, one problem from the N:1 model resurfaces in the M:N model: multiple user-space threads can block when one of them issues a blocking system function. When the OS kernel blocks an LWP, all user threads scheduled onto it by the threads library also block, though threads scheduled onto other LWPs in the process can continue to make progress. In addition, building and synchronizing a user-space scheduler with a kernel scheduler makes programming in this model extremely difficult and error prone.

Research on M:N threading vs. 1:1 threading was done for the Linux kernel to determine how threading was going to evolve; research into performance implications and use cases on Linux showed the 1:1 model to be superior in general. Unfortunately, the cost of the downsides outweighs many of the advantages to such an extent that in many cases it is not worth building or using an M:N threading model. In specific problem domains that are well understood, however, M:N may be the right choice.

Hybrid implementation examples:
- Scheduler activations used by the NetBSD native POSIX threads library implementation (an M:N model as opposed to a 1:1 kernel or user-space implementation model)
- Marcel from the PM2 project
- The OS for the Tera/Cray MTA
- Microsoft Windows 7