
THREADS

Single and Multithreaded Processes
A thread is a fundamental unit of CPU utilization that forms the basis of multithreaded computer systems.

Benefits
• Responsiveness
• Resource Sharing
• Economy
• Utilization of MP Architectures

Multithreaded Server Architecture
This architecture is inexpensive compared with a process-per-request design, because creating and switching amongst processes is costly.

User & Kernel Threads
User Threads: thread management is done by a user-level threads library. Three primary thread libraries:
– POSIX Pthreads
– Win32 threads
– Java threads
Kernel Threads: supported by the kernel. Examples:
– Windows XP/2000
– Solaris
– Linux

Threads vs. Processes
Both threads and processes are methods of parallelizing an application. However, processes are independent execution units that contain their own state information, use their own address spaces, and only interact with each other via interprocess communication mechanisms. Applications are typically divided into processes during the design phase; processes, in other words, are an architectural construct. By contrast, a thread is a coding construct that does not affect the architecture of an application. A single process might contain multiple threads; all threads within a process share the same state and same memory space, and can communicate with each other directly because they share the same variables.
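To make the difference concrete, the minimal pthreads sketch below (our own illustration; the names shared_counter and worker and the iteration count are arbitrary) creates two threads that update the same global variable directly, something two separate processes could not do without an interprocess communication mechanism.

    #include <pthread.h>
    #include <stdio.h>

    static long shared_counter = 0;                 /* visible to every thread */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* Each thread adds to the same variable because threads share one address space. */
    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);              /* protect the shared data */
            shared_counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared_counter = %ld\n", shared_counter);   /* prints 200000 */
        return 0;
    }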

Advantages of Threads
A process with multiple threads makes a great server, for example a printer server. Because threads can share common data, they do not need to use interprocess communication. Because of their very nature, threads can take advantage of multiprocessors. Threads are economical in the sense that:
• They only need a stack and storage for registers, so threads are cheap to create; they do not need a new address space.
• Threads use very little of the resources of the operating system in which they are working.
• Context switching is fast when working with threads, because we only have to save and/or restore the PC, SP and registers.
But this cheapness does not come free: the biggest drawback is that there is no protection between threads.

User-Level Threads
User-level threads are implemented in user-level libraries, rather than via system calls, so thread switching does not need to call the operating system or cause an interrupt to the kernel. In fact, the kernel knows nothing about user-level threads and manages them as if they were single-threaded processes.
Advantages: The most obvious advantage of this technique is that a user-level threads package can be implemented on an operating system that does not support threads. Management is simple: creating a thread, switching between threads and synchronizing threads can all be done without intervention of the kernel.
Disadvantages: There is a lack of coordination between threads and the operating system kernel: a process as a whole gets one time slice irrespective of whether it has one thread or 1000 threads within it. User-level threads also require non-blocking system calls, i.e. a multithreaded kernel; otherwise the entire process blocks in the kernel, even if there are runnable threads left in the process. For example, if one thread causes a page fault, the whole process blocks.

Kernel-Level Threads
In this method, the kernel knows about and manages the threads.
Advantages: Because the kernel has full knowledge of all threads, the scheduler may decide to give more time to a process having a large number of threads than to a process having a small number of threads. Kernel-level threads are especially good for applications that frequently block.
Disadvantages: Kernel-level threads are slow and inefficient; for instance, thread operations are hundreds of times slower than those of user-level threads.

Advantages of Threads over Multiple Processes
• Context switching: Threads are very inexpensive to create and destroy. They require space to store the PC, the SP, and the general-purpose registers, but they do not require space for memory-management information, information about open files or I/O devices in use, and so on. With so little context, it is much faster to switch between threads; in other words, a context switch is relatively easy when using threads.
• Sharing: Threads allow the sharing of many resources that cannot be shared between processes, for example the code section, the data section, and operating system resources such as open files.
Disadvantages of Threads over Multiple Processes
• Blocking: The major disadvantage is that, if the kernel is single-threaded, a system call by one thread will block the whole process, and the CPU may be idle during the blocking period.
• Security: Since there is extensive sharing among threads, there is a potential security problem. It is quite possible that one thread overwrites the stack of another thread (or damages shared data), although this is very unlikely since threads are meant to cooperate on a single task.

Applications that Benefit from Threads
A proxy server satisfying the requests of a number of computers on a LAN would benefit from a multithreaded process. In general, any program that has to do more than one task at a time could benefit from multithreading. For example, a program that reads input, processes it, and outputs results could have three threads, one for each task.
Applications that Cannot Benefit from Threads
Any sequential process that cannot be divided into parallel tasks will not benefit from threads, as each step would block until the previous one completes. For example, a program that displays the time of day would not benefit from multiple threads.

Synchronization

Background
• Concurrent access to shared data may result in data inconsistency (e.g., due to race conditions).
• Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes.
• Suppose that we want a solution to the producer-consumer problem that fills all the buffers.
– We can do so by having an integer count that keeps track of the number of full buffers.
– Initially, count is set to 0.
– count is incremented by the producer after producing a new buffer and decremented by the consumer after consuming a buffer.

• A race condition occurs when multiple processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place.
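The reason is that a statement such as count++ is not atomic: it compiles to a load, an increment and a store, and those instructions from two threads can interleave. The small C program below (our own illustration, not part of the original notes) leaves count deliberately unprotected; run it a few times and the final value will usually differ from 0 and vary between runs.

    #include <pthread.h>
    #include <stdio.h>

    static volatile int count = 0;      /* shared, deliberately unprotected */

    static void *producer(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            count++;                    /* load, add 1, store: not atomic */
        return NULL;
    }

    static void *consumer(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            count--;                    /* load, subtract 1, store: not atomic */
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        /* With a race the result is usually NOT 0 and changes from run to run. */
        printf("count = %d\n", count);
        return 0;
    }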

Critical Sections
A critical section is a section of code, common to n cooperating processes, in which the processes may be accessing common variables. A critical-section environment contains:
• Entry Section: code requesting entry into the critical section.
• Critical Section: code in which only one process can execute at any one time.
• Exit Section: the end of the critical section, releasing or allowing others in.
• Remainder Section: the rest of the code after the critical section.
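One possible way to write these four regions in code is sketched below (our own illustration; it uses a pthreads mutex as the entry and exit sections, which is only one of the mechanisms discussed in the rest of this section).

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static int shared = 0;

    static void *process(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 5; i++) {
            pthread_mutex_lock(&m);       /* entry section: request permission */
            shared++;                     /* critical section: touch shared data */
            pthread_mutex_unlock(&m);     /* exit section: let others in */
            printf("doing other work\n"); /* remainder section */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, process, NULL);
        pthread_create(&t2, NULL, process, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared = %d\n", shared);
        return 0;
    }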

Solution to the Critical-Section Problem
Software
• Peterson's Algorithm: based on busy waiting.
• Semaphores: a general facility provided by the operating system (e.g. OS/2), based on low-level techniques such as busy waiting or hardware assistance; described in more detail below.
• Monitors: a programming-language technique. Key idea: only one process may be active within the monitor at a time.
Hardware
• Test-and-Set: an atomic machine-level instruction. The test-and-set instruction writes to a memory location and returns its old value as a single atomic (i.e. non-interruptible) operation. If multiple processes may access the same memory and a process is currently performing a test-and-set, no other process may begin another test-and-set until the first process is done.
• Swap: atomically swaps the contents of two words.
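As a sketch of how test-and-set yields a lock, the code below builds a simple spinlock on C11's atomic_flag, whose atomic_flag_test_and_set call is a portable wrapper around this kind of instruction; the spin_lock/spin_unlock helper names are our own.

    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear == unlocked */

    static void spin_lock(void)
    {
        /* Test-and-set: atomically write 1 and get the old value back;
           keep retrying (busy waiting) while someone else already holds it. */
        while (atomic_flag_test_and_set(&lock))
            ;                                     /* spin */
    }

    static void spin_unlock(void)
    {
        atomic_flag_clear(&lock);                 /* release: write 0 */
    }

    int main(void)
    {
        spin_lock();
        printf("inside the critical section\n");
        spin_unlock();
        return 0;
    }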

Peterson's Algorithm
• a simple algorithm that can be run by two processes to ensure mutual exclusion for one resource (say one variable or data structure)
• does not require any special hardware
• uses busy waiting (a spinlock)
Semaphore
• a variable used for signalling between processes
• the two main operations on a semaphore are:
– wait (or acquire)
– signal (or release)
• A resource such as a shared data structure is protected by a semaphore. You must acquire the semaphore before using the resource.
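A sketch of Peterson's algorithm described above, for two processes numbered 0 and 1, is shown below (the flag and turn variable names follow the usual textbook presentation; on modern hardware the plain loads and stores would also need memory barriers, which are omitted here to keep the idea clear).

    #include <stdbool.h>

    /* Shared between the two processes/threads. */
    static volatile bool flag[2] = { false, false };  /* flag[i]: process i wants in */
    static volatile int  turn = 0;                    /* whose turn it is to yield */

    static void enter_region(int i)       /* i is 0 or 1 */
    {
        int other = 1 - i;
        flag[i] = true;                   /* announce interest */
        turn = other;                     /* politely give the other the turn */
        while (flag[other] && turn == other)
            ;                             /* busy wait (spinlock) */
    }

    static void leave_region(int i)
    {
        flag[i] = false;                  /* no longer interested */
    }

    int main(void)
    {
        enter_region(0);
        /* critical section for process 0 */
        leave_region(0);
        return 0;
    }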

Deadlock and Starvation
• Deadlock: two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes.
• Let S and Q be two semaphores initialized to 1:
        P0                  P1
    wait(S);            wait(Q);
    wait(Q);            wait(S);
      ...                 ...
    signal(S);          signal(Q);
    signal(Q);          signal(S);
• Starvation: indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended.
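The same scenario can be written out with POSIX semaphores, as in the sketch below (our own illustration). It is a deliberate deadlock demonstration: if thread P0 takes S and thread P1 takes Q before either reaches its second wait, both block forever, so running the program will usually hang.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t S, Q;                     /* both initialized to 1 */

    static void *p0(void *arg)
    {
        (void)arg;
        sem_wait(&S);
        sem_wait(&Q);                      /* may block forever if P1 holds Q */
        printf("P0 in critical section\n");
        sem_post(&S);
        sem_post(&Q);
        return NULL;
    }

    static void *p1(void *arg)
    {
        (void)arg;
        sem_wait(&Q);
        sem_wait(&S);                      /* may block forever if P0 holds S */
        printf("P1 in critical section\n");
        sem_post(&Q);
        sem_post(&S);
        return NULL;
    }

    int main(void)
    {
        sem_init(&S, 0, 1);
        sem_init(&Q, 0, 1);
        pthread_t t0, t1;
        pthread_create(&t0, NULL, p0, NULL);
        pthread_create(&t1, NULL, p1, NULL);
        pthread_join(t0, NULL);            /* likely never returns: deadlock */
        pthread_join(t1, NULL);
        return 0;
    }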

Bounded-Buffer Problem
Producer
• creates data and adds it to the buffer
• must not overflow the buffer
Consumer
• removes data from the buffer (consumes it)
• must not get ahead of the producer
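A minimal sketch of one common bounded-buffer solution using POSIX counting semaphores is shown below (BUF_SIZE, empty, full and the item counts are our own choices; error handling is omitted). The empty semaphore counts free slots so the producer cannot overflow the buffer, and full counts filled slots so the consumer cannot get ahead of the producer.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define BUF_SIZE 8

    static int buffer[BUF_SIZE];
    static int in = 0, out = 0;            /* next slot to fill / to empty */
    static sem_t empty, full;              /* counting semaphores */
    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

    static void *producer(void *arg)
    {
        (void)arg;
        for (int item = 0; item < 32; item++) {
            sem_wait(&empty);              /* wait for a free slot */
            pthread_mutex_lock(&m);
            buffer[in] = item;
            in = (in + 1) % BUF_SIZE;
            pthread_mutex_unlock(&m);
            sem_post(&full);               /* signal: one more full slot */
        }
        return NULL;
    }

    static void *consumer(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 32; i++) {
            sem_wait(&full);               /* wait for a filled slot */
            pthread_mutex_lock(&m);
            int item = buffer[out];
            out = (out + 1) % BUF_SIZE;
            pthread_mutex_unlock(&m);
            sem_post(&empty);              /* signal: one more free slot */
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void)
    {
        sem_init(&empty, 0, BUF_SIZE);     /* all slots free initially */
        sem_init(&full, 0, 0);             /* no slots full initially */
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }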

Readers & Writers
• A data set is shared among a number of concurrent processes:
– Readers: only read the data set; they do not perform any updates.
– Writers: can both read and write.
• Problem:
– Allow multiple readers to read at the same time.
– Only one writer can access the shared data at any one time.
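One classic solution (the reader-preference variant) is sketched below with a semaphore and a reader count (names are our own; error handling omitted): the first reader locks writers out, the last reader lets them back in, and a writer needs exclusive access.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t rw_mutex;                 /* held by a writer, or by the group of readers */
    static pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;
    static int read_count = 0;             /* number of readers currently reading */
    static int shared_data = 0;

    static void *writer(void *arg)
    {
        (void)arg;
        sem_wait(&rw_mutex);               /* exclusive access */
        shared_data++;                     /* write */
        sem_post(&rw_mutex);
        return NULL;
    }

    static void *reader(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&count_lock);
        if (++read_count == 1)
            sem_wait(&rw_mutex);           /* first reader blocks writers */
        pthread_mutex_unlock(&count_lock);

        printf("read %d\n", shared_data);  /* many readers may be here at once */

        pthread_mutex_lock(&count_lock);
        if (--read_count == 0)
            sem_post(&rw_mutex);           /* last reader lets writers in */
        pthread_mutex_unlock(&count_lock);
        return NULL;
    }

    int main(void)
    {
        sem_init(&rw_mutex, 0, 1);
        pthread_t r1, r2, w;
        pthread_create(&w, NULL, writer, NULL);
        pthread_create(&r1, NULL, reader, NULL);
        pthread_create(&r2, NULL, reader, NULL);
        pthread_join(w, NULL);
        pthread_join(r1, NULL);
        pthread_join(r2, NULL);
        return 0;
    }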

Dining-Philosophers Problem
• The dining philosophers problem is summarized as five philosophers sitting at a table doing one of two things: eating or thinking. While eating, they are not thinking, and while thinking, they are not eating. The five philosophers sit at a circular table with a large bowl of spaghetti in the center. A fork is placed between each pair of adjacent philosophers, so each philosopher has one fork to his left and one fork to his right. As spaghetti is difficult to serve and eat with a single fork, it is assumed that a philosopher must eat with two forks, and each philosopher can only use the forks on his immediate left and immediate right.
• Shared data
– Bowl of rice (data set)
– Semaphore chopstick[5], each element initialized to 1
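A sketch of one philosopher's loop using the chopstick semaphores above is shown below (POSIX semaphores; our own illustration). Making the last philosopher pick up the right chopstick first is one common way to avoid the deadlock in which every philosopher holds exactly one chopstick.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 5
    static sem_t chopstick[N];             /* each initialized to 1 */

    static void *philosopher(void *arg)
    {
        int i = *(int *)arg;
        int left = i, right = (i + 1) % N;

        for (int round = 0; round < 3; round++) {
            /* think */
            /* To avoid deadlock, the last philosopher picks up the right chopstick first. */
            int first  = (i == N - 1) ? right : left;
            int second = (i == N - 1) ? left  : right;

            sem_wait(&chopstick[first]);
            sem_wait(&chopstick[second]);
            printf("philosopher %d eats\n", i);   /* eat, holding both chopsticks */
            sem_post(&chopstick[second]);
            sem_post(&chopstick[first]);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t[N];
        int id[N];
        for (int i = 0; i < N; i++)
            sem_init(&chopstick[i], 0, 1);
        for (int i = 0; i < N; i++) {
            id[i] = i;
            pthread_create(&t[i], NULL, philosopher, &id[i]);
        }
        for (int i = 0; i < N; i++)
            pthread_join(t[i], NULL);
        return 0;
    }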