
THREADS

Single and Multithreaded Processes

A thread is a fundamental unit of CPU utilization that forms the basis of multithreaded computer systems.

Benefits

Responsiveness
Resource Sharing
Economy
Utilization of MP Architectures

Multithreaded Server Architecture


This architecture is economical: creating threads and switching among them is far cheaper than creating and switching among separate processes.

User & Kernel Threads


User Threads: thread management is done by a user-level threads library. Three primary thread libraries:
POSIX Pthreads
Win32 threads
Java threads

Kernel Threads: supported directly by the kernel. Examples:
Windows XP/2000
Solaris
Linux

Threads vs. Processes


Both threads and processes are methods of parallelizing an application. However, processes are independent execution units that contain their own state information, use their own address spaces, and interact with each other only via interprocess communication mechanisms. A single process might contain multiple threads; all threads within a process share the same state and the same memory space, and can communicate with each other directly because they share the same variables. Applications are typically divided into processes during the design phase. Processes, in other words, are an architectural construct. By contrast, a thread is a coding construct that does not affect the architecture of an application.

Advantages of Thread
A process with multiple threads makes a great server, for example a printer server. Because threads can share common data, they do not need to use interprocess communication. By their very nature, threads can take advantage of multiprocessors. Threads are economical in the sense that they need only a stack and storage for registers, so they are cheap to create.

Threads use very few resources of the operating system in which they run; in particular, they do not need a new address space. Context switching is fast when working with threads, because only the PC, SP, and registers have to be saved and restored. But this cheapness does not come free: the biggest drawback is that there is no protection between threads.
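As a concrete illustration of this sharing, here is a minimal Python sketch (all names are illustrative): worker threads started in one process append into a single shared list, because threads share one address space; a lock stands in for the protection that threads otherwise lack.

```python
import threading

results = []                 # lives in the one address space all threads share
lock = threading.Lock()      # threads have no built-in protection; add our own

def worker(n):
    # Each thread needs only its own stack and registers; the data
    # (including `results`) is shared with every other thread directly.
    with lock:
        results.append(n * n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # [0, 1, 4, 9]
```

No interprocess communication mechanism is needed: the threads communicate through the shared variable itself.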

User Level Threads


User-level threads are implemented in user-level libraries rather than via system calls, so switching threads does not require a call into the operating system or an interrupt to the kernel. In fact, the kernel knows nothing about user-level threads and manages the process as if it were single-threaded.

Advantages: The most obvious advantage of this technique is that a user-level threads package can be implemented on an operating system that does not support threads; user-level threads require no modification to the operating system. Simple management: creating a thread, switching between threads, and synchronizing threads can all be done without intervention of the kernel.

Disadvantages: There is a lack of coordination between threads and the operating system kernel. Therefore, the process as a whole gets one time slice, irrespective of whether it contains one thread or 1000 threads. User-level threads require non-blocking system calls, i.e., a multithreaded kernel; otherwise the entire process blocks in the kernel, even if there are runnable threads left in the process. For example, if one thread causes a page fault, the whole process blocks.

Kernel-Level Threads In this method, the kernel knows about and manages the threads. Advantages: Because the kernel has full knowledge of all threads, the scheduler may decide to give more time to a process with a large number of threads than to a process with few threads. Kernel-level threads are especially good for applications that frequently block. Disadvantages: Kernel-level threads are slow and inefficient; for instance, thread operations are hundreds of times slower than those of user-level threads.

Advantages of Threads over Multiple Processes

Context switching: Threads are very inexpensive to create and destroy. They require space only to store the PC, the SP, and the general-purpose registers; they do not require space for memory-management information, information about open files or I/O devices in use, etc. With so little context, it is much faster to switch between threads; in other words, a context switch using threads is relatively cheap.

Sharing: Threads allow the sharing of many resources that cannot be shared between processes, for example the code section, the data section, and operating-system resources such as open files.

Disadvantages of Threads over Multiple Processes

Blocking: The major disadvantage is that if the kernel is single-threaded, a system call by one thread will block the whole process, and the CPU may be idle during the blocking period.

Security: Since there is extensive sharing among threads, there is a potential security problem. It is quite possible that one thread overwrites the stack of another thread (or damages shared data), although this is very unlikely since threads are meant to cooperate on a single task.

Application that Benefits from Threads


A proxy server satisfying the requests of a number of computers on a LAN would benefit from a multithreaded process. In general, any program that has to do more than one task at a time can benefit from multithreading. For example, a program that reads input, processes it, and writes output could have three threads, one for each task.

Application that cannot Benefit from Threads


Any sequential process that cannot be divided into parallel tasks will not benefit from threads, since each task would block until the previous one completed. For example, a program that displays the time of day would not benefit from multiple threads.

Synchronization

Background
Concurrent access to shared data may result in data inconsistency (e.g., due to race conditions) Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes Suppose that we wanted to provide a solution to the consumer-producer problem that fills all the buffers.
We can do so by having an integer count that keeps track of the number of full buffers. Initially, count is set to 0. Incremented by producer after producing a new buffer Decremented by consumer after consuming a buffer

A race condition occurs when multiple processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the access takes place.
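The shared count above is exactly where a race can occur: each increment and decrement is a read-modify-write, and interleaved updates can be lost. A minimal Python sketch (the iteration count and names are illustrative) shows how guarding the update with a lock keeps the outcome independent of scheduling order:

```python
import threading

ITERATIONS = 100_000
count = 0
count_lock = threading.Lock()

def producer_like():
    global count
    for _ in range(ITERATIONS):
        with count_lock:    # without this lock, `count += 1` is a
            count += 1      # read-modify-write and updates can be lost

def consumer_like():
    global count
    for _ in range(ITERATIONS):
        with count_lock:
            count -= 1

t1 = threading.Thread(target=producer_like)
t2 = threading.Thread(target=consumer_like)
t1.start(); t2.start()
t1.join(); t2.join()
print(count)  # 0 — every increment is matched by a decrement
```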

Critical Sections

A section of code, common to n cooperating processes, in which the processes may be accessing common variables.
A Critical Section Environment contains:
Entry section: code requesting entry into the critical section.
Critical section: code in which only one process can execute at any one time.
Exit section: the end of the critical section, releasing the section or allowing others in.
Remainder section: the rest of the code, AFTER the critical section.
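The four sections map directly onto lock operations; a minimal Python sketch with illustrative names:

```python
import threading

section_lock = threading.Lock()
shared = []

def task(item):
    section_lock.acquire()      # entry section: request entry
    try:
        shared.append(item)     # critical section: one thread at a time
    finally:
        section_lock.release()  # exit section: release, allowing others in
    # remainder section: the rest of the code after the critical section

threads = [threading.Thread(target=task, args=(i,)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(len(shared))  # 3
```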

Solution to Critical-Section Problem


Software
Peterson's Algorithm: based on busy waiting
Semaphores: a general facility provided by the operating system (e.g., OS/2), based on low-level techniques such as busy waiting or hardware assistance; described in more detail below

Monitors: a programming-language technique. Key idea: only one process may be active within the monitor at a time
Hardware
Test-and-Set: an atomic machine-level instruction
In computer science, the test-and-set instruction writes to a memory location and returns its old value as a single atomic (i.e., non-interruptible) operation. If multiple processes may access the same memory, and a process is currently performing a test-and-set, no other process may begin another test-and-set until the first process is done.

Swap: atomically swaps contents of two words

Peterson's Algorithm
a simple algorithm that can be run by two processes to ensure mutual exclusion for one resource (say, one variable or data structure)
does not require any special hardware
uses busy waiting (a spinlock)

Semaphore
a variable used for signalling between processes
two main operations on a semaphore:
wait (or acquire): wait for the resource
signal (or release): signal to the other process
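A minimal Python sketch of Peterson's algorithm for two threads (the `flag`/`turn` names and the iteration count are illustrative; CPython's interpreter serializes these plain loads and stores, giving the sequential consistency the algorithm assumes):

```python
import threading

ITERS = 5_000
flag = [False, False]   # flag[i] is True while thread i wants to enter
turn = 0                # whose turn it is to yield
counter = 0             # shared resource protected by the algorithm

def worker(i):
    global turn, counter
    other = 1 - i
    for _ in range(ITERS):
        flag[i] = True                       # entry: announce intent
        turn = other                         # give the other thread priority
        while flag[other] and turn == other:
            pass                             # busy wait (spinlock)
        counter += 1                         # critical section
        flag[i] = False                      # exit: withdraw intent

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 10000 — mutual exclusion holds, no increments lost
```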

A resource such as a shared data structure is protected by a semaphore. You must acquire the semaphore before using the resource.
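For instance, a minimal Python sketch using `threading.Semaphore` initialized to 1 (the shared structure and names are illustrative):

```python
import threading

sem = threading.Semaphore(1)   # 1 means one holder at a time, like a mutex
shared_data = {"hits": 0}      # the resource the semaphore protects

def use_resource():
    sem.acquire()              # wait(): blocks if another thread holds it
    try:
        shared_data["hits"] += 1
    finally:
        sem.release()          # signal(): wakes one waiting thread

threads = [threading.Thread(target=use_resource) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(shared_data["hits"])  # 5
```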

Deadlock and Starvation


Deadlock: two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes. Let S and Q be two semaphores initialized to 1:

P0:            P1:
wait(S);       wait(Q);
wait(Q);       wait(S);
 ...            ...
signal(S);     signal(Q);
signal(Q);     signal(S);
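P0 and P1 deadlock because they take S and Q in opposite orders, creating a circular wait. One standard remedy, sketched below in Python with illustrative names, is to impose a single global acquisition order:

```python
import threading

S = threading.Lock()
Q = threading.Lock()
done = []

def p0():
    with S:              # both threads acquire S first, then Q
        with Q:
            done.append("p0")

def p1():
    with S:              # same global order as p0, not Q-then-S
        with Q:
            done.append("p1")

t0 = threading.Thread(target=p0)
t1 = threading.Thread(target=p1)
t0.start(); t1.start()
t0.join(); t1.join()
print(sorted(done))  # ['p0', 'p1'] — both complete, no circular wait
```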

Starvation indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended.

Bounded Buffer problem


Producer: creates data and adds it to the buffer; must not overflow the buffer.
Consumer: removes data from the buffer (consumes it); must not get ahead of the producer.
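These two constraints map onto two counting semaphores plus a mutex; a minimal Python sketch (buffer capacity, item count, and names are illustrative):

```python
import threading
from collections import deque

N = 4                                  # buffer capacity
buffer = deque()
mutex = threading.Lock()               # protects the buffer itself
empty = threading.Semaphore(N)         # counts free slots (producer can't overflow)
full = threading.Semaphore(0)          # counts filled slots (consumer can't get ahead)
consumed = []

def producer():
    for item in range(10):
        empty.acquire()                # wait for a free slot
        with mutex:
            buffer.append(item)
        full.release()                 # announce a filled slot

def consumer():
    for _ in range(10):
        full.acquire()                 # wait for a filled slot
        with mutex:
            item = buffer.popleft()
        empty.release()                # announce a free slot
        consumed.append(item)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)  # [0, 1, ..., 9] — FIFO order, no overflow, no underflow
```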

Reader & Writer


A data set is shared among a number of concurrent processes.
Readers: only read the data set; they do not perform any updates.
Writers: can both read and write.

Problem
Allow multiple readers to read at the same time, but allow only one writer to access the shared data at a time.
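The classic first-readers-preference solution tracks a reader count; the first reader in locks out writers and the last reader out lets them back in. A minimal Python sketch with illustrative names:

```python
import threading

rw_mutex = threading.Semaphore(1)    # held by a writer, or by the reader group
read_count_lock = threading.Lock()   # protects read_count itself
read_count = 0
data = {"value": 0}
seen = []

def reader():
    global read_count
    with read_count_lock:
        read_count += 1
        if read_count == 1:          # first reader locks out writers
            rw_mutex.acquire()
    seen.append(data["value"])       # many readers may be here at once
    with read_count_lock:
        read_count -= 1
        if read_count == 0:          # last reader lets writers back in
            rw_mutex.release()

def writer():
    rw_mutex.acquire()               # exclusive access
    data["value"] += 1
    rw_mutex.release()

threads = [threading.Thread(target=writer)] + \
          [threading.Thread(target=reader) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(data["value"])  # 1 — the single write happened exactly once
```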

Dining-Philosophers Problem

The dining philosophers problem is summarized as five philosophers sitting at a table doing one of two things: eating or thinking. While eating, they are not thinking, and while thinking, they are not eating. The five philosophers sit at a circular table with a large bowl of spaghetti in the center. A fork is placed between each pair of adjacent philosophers, so each philosopher has one fork to his left and one fork to his right. As spaghetti is difficult to serve and eat with a single fork, it is assumed that a philosopher must eat with two forks, and each philosopher can use only the forks on his immediate left and immediate right. Shared data: the bowl of spaghetti (the data set) and an array of semaphores, fork[5], each initialized to 1.
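A minimal Python sketch with semaphores as forks. The lowest-numbered-fork-first ordering is one standard deadlock-avoidance tactic (not part of the problem statement above), and all names are illustrative:

```python
import threading

N = 5
forks = [threading.Semaphore(1) for _ in range(N)]  # one fork per gap
ate = []

def philosopher(i):
    # Always pick up the lower-numbered fork first, so no circular
    # wait can form around the table (philosopher 4 grabs fork 0 first).
    first, second = sorted((i, (i + 1) % N))
    forks[first].acquire()
    forks[second].acquire()
    ate.append(i)                 # eating: holds both adjacent forks
    forks[second].release()
    forks[first].release()

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(ate))  # [0, 1, 2, 3, 4] — everyone eats, no deadlock
```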
