
Life Cycle of a Thread

A new thread begins its lifecycle in the Unstarted state. The thread remains in the
Unstarted state until the program calls Thread method Start, which places the
thread in the Started state (sometimes called the Ready or Runnable state) and
immediately returns control to the calling thread. Then the thread that invoked
Start, the newly Started thread and any other threads in the program execute
concurrently.

The highest priority Started thread enters the Running state (i.e., begins
executing) when the operating system assigns a processor to the thread (Section
12.3 discusses thread priorities). When a Started thread receives a processor for
the first time and becomes a Running thread, the thread executes its
ThreadStart delegate, which specifies the actions the thread will perform during
its lifecycle. When a program creates a new Thread, the program specifies the
Thread's ThreadStart delegate as the argument to the Thread constructor. The
ThreadStart delegate must be a method that returns void and takes no
arguments.
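
As an illustration, here is a minimal sketch of creating and starting a thread (the class and method names are ours, not part of any library):

using System;
using System.Threading;

class ThreadStartDemo
{
    // The ThreadStart delegate target: returns void and takes no arguments.
    static void Print()
    {
        Console.WriteLine("Worker thread running");
    }

    static void Main()
    {
        // Pass the ThreadStart delegate to the Thread constructor.
        Thread worker = new Thread(new ThreadStart(Print));
        worker.Start();     // Unstarted -> Started; Start returns immediately
        Console.WriteLine("Main and worker now execute concurrently");
        worker.Join();      // wait for the worker to reach the Stopped state
    }
}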
A Running thread enters the Stopped (or Dead) state when its ThreadStart
delegate terminates. Note that a program can force a thread into the Stopped
state by calling Thread method Abort on the appropriate Thread object. Method
Abort throws a ThreadAbortException in the thread, normally causing the
thread to terminate. When a thread is in the Stopped state and there are no
references to the thread object, the garbage collector can remove the thread
object from memory.
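
The following sketch (names are ours) shows the usual handling of Abort; note that the runtime automatically re-throws the ThreadAbortException when the catch block ends, unless ResetAbort is called:

using System;
using System.Threading;

class AbortDemo
{
    static void Run()
    {
        try
        {
            while (true)
                Thread.Sleep(100);       // simulate ongoing work
        }
        catch (ThreadAbortException)
        {
            // Raised in this thread by Abort; clean up here. The runtime
            // re-throws the exception automatically after this catch block.
            Console.WriteLine("Thread aborting");
        }
    }

    static void Main()
    {
        Thread t = new Thread(new ThreadStart(Run));
        t.Start();
        Thread.Sleep(500);
        t.Abort();                       // forces the thread toward Stopped
        t.Join();
    }
}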
A thread enters the Blocked state when the thread issues an input/output
request. The operating system blocks the thread from executing until the
operating system can complete the I/O for which the thread is waiting. At that
point, the thread returns to the Started state, so it can resume execution. A
Blocked thread cannot use a processor even if one is available.
There are three ways in which a Running thread enters the WaitSleepJoin state.
If a thread encounters code that it cannot execute yet (normally because a
condition is not satisfied), the thread can call Monitor method Wait to enter the
WaitSleepJoin state. Once in this state, a thread returns to the Started state
when another thread invokes Monitor method Pulse or PulseAll. Method Pulse
moves the next waiting thread back to the Started state. Method PulseAll moves
all waiting threads back to the Started state.
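
A sketch of the usual Wait/Pulse pattern follows (the names gate, ready, Consumer and so on are illustrative): the waiting thread tests its condition in a loop while holding the lock, and the signaling thread sets the condition and calls Pulse.

using System;
using System.Threading;

class WaitPulseDemo
{
    static readonly object gate = new object();
    static bool ready = false;

    static void Consumer()
    {
        lock (gate)
        {
            while (!ready)              // condition not yet satisfied
                Monitor.Wait(gate);     // Running -> WaitSleepJoin; releases the lock
            Console.WriteLine("Condition satisfied; consumer resumes");
        }
    }

    static void Main()
    {
        Thread consumer = new Thread(new ThreadStart(Consumer));
        consumer.Start();
        Thread.Sleep(200);
        lock (gate)
        {
            ready = true;
            Monitor.Pulse(gate);        // moves the waiting thread back to Started
        }
        consumer.Join();
    }
}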
A Running thread can call Thread method Sleep to enter the WaitSleepJoin
state for a period of milliseconds specified as the argument to Sleep. A sleeping
thread returns to the Started state when its designated sleep time expires.
Sleeping threads cannot use a processor, even if one is available.
Any thread that enters the WaitSleepJoin state by calling Monitor method Wait
or by calling Thread method Sleep also leaves the WaitSleepJoin state and
returns to the Started state if the sleeping or waiting Thread's Interrupt method
is called by another thread in the program.
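
A small sketch of Interrupt (names are ours); one detail worth noting is that the awakened thread receives a ThreadInterruptedException, which it typically catches:

using System;
using System.Threading;

class InterruptDemo
{
    static void Sleeper()
    {
        try
        {
            Thread.Sleep(Timeout.Infinite);   // enter the WaitSleepJoin state
        }
        catch (ThreadInterruptedException)
        {
            Console.WriteLine("Woken by Interrupt");
        }
    }

    static void Main()
    {
        Thread t = new Thread(new ThreadStart(Sleeper));
        t.Start();
        Thread.Sleep(200);                    // give the sleeper time to block
        t.Interrupt();                        // wake the sleeping thread
        t.Join();
    }
}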
If a thread (which we will call the dependent thread) cannot continue executing
unless another thread terminates, the dependent thread calls the other thread's
Join method to "join" the two threads. When two threads are "joined," the
dependent thread leaves the WaitSleepJoin state when the other thread finishes
execution (enters the Stopped state). If a Running Thread's Suspend method is
called, the Running thread enters the Suspended state. A Suspended thread
returns to the Started state when another thread in the program invokes the
Suspended thread's Resume method.
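
For example, a minimal Join sketch (names are ours):

using System;
using System.Threading;

class JoinDemo
{
    static void Worker()
    {
        Thread.Sleep(300);                    // simulate work
        Console.WriteLine("Worker finished");
    }

    static void Main()
    {
        Thread worker = new Thread(new ThreadStart(Worker));
        worker.Start();
        // The dependent (main) thread is in WaitSleepJoin until the
        // worker reaches the Stopped state.
        worker.Join();
        Console.WriteLine("Dependent thread resumes");
    }
}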
Thread Priorities and Thread Scheduling
Every thread has a priority in the range from ThreadPriority.Lowest to
ThreadPriority.Highest. These two values come from the ThreadPriority
enumeration (namespace System.Threading). The enumeration consists of the
values Lowest, BelowNormal, Normal, AboveNormal and Highest. By default,
each thread has priority Normal.
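
Setting a priority is a one-line assignment, as in this fragment (names are ours):

using System;
using System.Threading;

class PriorityDemo
{
    static void Work()
    {
        Console.WriteLine("Running at " + Thread.CurrentThread.Priority);
    }

    static void Main()
    {
        Thread t = new Thread(new ThreadStart(Work));
        t.Priority = ThreadPriority.AboveNormal;   // any ThreadPriority value
        t.Start();
        t.Join();
    }
}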

The Windows operating system supports a concept, called timeslicing, that
enables threads of equal priority to share a processor. Without timeslicing, each
thread in a set of equal-priority threads runs to completion (unless the thread
leaves the Running state and enters the WaitSleepJoin, Suspended or Blocked
state) before the thread's peers get a chance to execute. With timeslicing, each
thread receives a brief burst of processor time, called a quantum, during which
the thread can execute. At the completion of the quantum, even if the thread has
not finished executing, the processor is taken away from that thread and given to
the next thread of equal priority, if one is available.
The job of the thread scheduler is to keep the highest-priority thread running at
all times and, if there is more than one highest-priority thread, to ensure that all
such threads execute for a quantum in round-robin fashion (i.e., these threads
can be timesliced).
Fig. 12.2 illustrates the multilevel priority queue for threads. In Fig. 12.2, assuming a
single-processor computer, threads A and B each execute for a quantum in
round-robin fashion until both threads complete execution. This means that A
gets a quantum of time to run. Then B gets a quantum. Then A gets another
quantum. Then B gets another quantum. This continues until one thread
completes. The processor then devotes all its power to the thread that remains
(unless another thread of that priority is Started). Next, thread C runs to
completion. Threads D, E and F each execute for a quantum in round-robin
fashion until they all complete execution. This process continues until all threads
run to completion. Note that, depending on the operating system, new higher-priority
threads could postpone, possibly indefinitely, the execution of lower-priority
threads. Such indefinite postponement often is referred to more colorfully
as starvation.
A thread's priority can be adjusted with the Priority property, which accepts
values from the ThreadPriority enumeration. If the argument is not one of the
valid thread-priority constants, an ArgumentException occurs. A thread
executes until it dies, becomes Blocked for input/output (or some other reason),
calls Sleep, calls Monitor method Wait or Join, is preempted by a thread of
higher priority or has its quantum expire. A thread with a higher priority than the
Running thread can become Started (and hence preempt the Running thread) if
a sleeping thread wakes up, if I/O completes for a thread that Blocked for that
I/O, if either Pulse or PulseAll is called on an object on which Wait was called,
or if a thread to which the high-priority thread was Joined completes.
12.4 Summary
Computers perform multiple operations concurrently. Programming languages
generally provide only a simple set of control structures that enable programmers
to perform just one action at a time and proceed to the next action only after the

previous one finishes. The FCL, however, provides the C# programmer with the
ability to specify that applications contain threads of execution, where each
thread designates a portion of a program that may execute concurrently with
other threads. This capability is called multithreading.
A thread is initialized using the Thread class's constructor, which receives a
ThreadStart delegate. This delegate specifies the method that contains the tasks
a thread will perform. A thread remains in the Unstarted state until the thread's
Start method is called, at which point the thread enters the Started state. A thread in the
Started state enters the Running state when the system assigns a processor to
the thread. The system assigns the processor to the highest-priority Started
thread. A thread enters the Stopped state when its ThreadStart delegate
completes or terminates. A thread is forced into the Stopped state when its Abort
method is called (by itself or by another thread). A Running thread enters the
Blocked state when the thread issues an input/output request. A Blocked thread
becomes Started when the I/O it is waiting for completes. A Blocked thread
cannot use a processor, even if one is available.
If a thread needs to sleep, it calls method Sleep. A thread wakes up when the
designated sleep interval expires. If a thread cannot continue executing unless
another thread terminates, the first thread, referred to as the dependent thread,
calls the other thread's Join method to "join" the two threads. When two threads
are joined, the dependent thread leaves the WaitSleepJoin state when the other
thread finishes execution. When a thread encounters code that it cannot yet run,
the thread can call Monitor method Wait until certain actions occur that enable
the thread to continue executing. This method call puts the thread into the
WaitSleepJoin state. Any thread in the WaitSleepJoin state can leave that state if
another thread invokes Thread method Interrupt on the thread in the
WaitSleepJoin state. If a thread has called Monitor method Wait, a
corresponding call to Monitor method Pulse or PulseAll by another thread in the
program will transition the original thread from the WaitSleepJoin state to the
Started state.
If Thread method Suspend is called on a thread, the thread enters the
Suspended state. A thread leaves the Suspended state when a separate thread
invokes Thread method Resume on the suspended thread.
Every C# thread has a priority. The job of the thread scheduler is to keep the
highest-priority thread running at all times and, if there is more than one
highest-priority thread, to ensure that all equally high-priority threads execute for a
quantum at a time in round-robin fashion. A thread's priority can be adjusted with
the Priority property, which is assigned an argument from the ThreadPriority
enumeration.

Thread Pooling
You can use thread pooling to make much more efficient use of multiple threads,
depending on your application. Many applications use multiple threads, but often
those threads spend a great deal of time in the sleeping state waiting for an
event to occur. Other threads might enter a sleeping state and be awakened only
periodically to poll for a change or update status information before going to
sleep again. Using thread pooling provides your application with a pool of worker
threads that are managed by the system, allowing you to concentrate on
application tasks rather than thread management. In fact, if you have a number
of short tasks that require more than one thread, using the ThreadPool class is
the easiest and best way to take advantage of multiple threads. Using a thread
pool also lets the system optimize thread time slices for better throughput,
taking into account not only this process but all the current processes on the
computer, something your application knows nothing about.
The .NET Framework uses thread pools for several purposes: asynchronous
calls, System.Net socket connections, asynchronous I/O completion, and timers
and registered wait operations, among others.
You use the thread pool by calling ThreadPool.QueueUserWorkItem from
managed code (or CorQueueUserWorkItem from unmanaged code) and
passing a WaitCallback delegate wrapping the method that you want to add to
the queue. You can also queue work items that are related to a wait operation to
the thread pool by using ThreadPool.RegisterWaitForSingleObject and passing a
WaitHandle that, when signaled or when timed out, raises a call to the method
wrapped by the WaitOrTimerCallback delegate. In both cases, the thread pool
uses or creates a background thread to invoke the callback method.
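
For example, a minimal QueueUserWorkItem sketch (the method name and state strings are illustrative):

using System;
using System.Threading;

class ThreadPoolDemo
{
    // A WaitCallback target takes a single object parameter (the state).
    static void DoWork(object state)
    {
        Console.WriteLine("Pool thread processing: " + state);
    }

    static void Main()
    {
        ThreadPool.QueueUserWorkItem(new WaitCallback(DoWork), "task 1");
        ThreadPool.QueueUserWorkItem(new WaitCallback(DoWork), "task 2");
        // Pool threads are background threads, so keep the process alive
        // long enough for the callbacks to run.
        Thread.Sleep(500);
    }
}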
You can also use the unsafe methods ThreadPool.UnsafeQueueUserWorkItem
and ThreadPool.UnsafeRegisterWaitForSingleObject when you know that the
caller's stack is irrelevant to any security checks performed during the execution
of the queued task. QueueUserWorkItem and RegisterWaitForSingleObject
both capture the caller's stack, which is merged into the stack of the thread pool
thread when the thread pool thread starts to execute a task. If a security check is
required, that entire stack must be checked. Although the check provides safety,
it also has a performance cost. Using the Unsafe method calls does not provide
complete safety, but it will provide better performance.
There is only one ThreadPool object per process. The thread pool is created the
first time you call ThreadPool.QueueUserWorkItem, or when a timer or
registered wait operation queues a callback method. One thread monitors all
tasks that have been queued to the thread pool. When a task has completed, a

thread from the thread pool executes the corresponding callback method. There
is no way to cancel a work item after it has been queued.
The number of operations that can be queued to the thread pool is limited only by
available memory; however, the thread pool will enforce a limit on the number of
threads it allows to be active in the process simultaneously (which is subject to
the number of CPUs and other considerations). Each thread uses the default
stack size, runs at the default priority, and is in the multithreaded apartment. If
one of the threads becomes idle (as when waiting on an event) in managed
code, the thread pool injects another worker thread to keep all the processors
busy. If all thread pool threads are constantly busy, but there is pending work in
the queue, the thread pool will, after some period of time, create another worker
thread. However, the number of threads will never exceed the maximum value.
The ThreadPool also switches to the correct AppDomain when executing
ThreadPool callbacks.
There are several scenarios in which it is appropriate to create and manage your
own threads instead of using the ThreadPool. You should do so:

If you require a task to have a particular priority.
If you have a task that might run a long time (and therefore block other tasks).
If you need to place threads into a single-threaded apartment (all ThreadPool threads are in the multithreaded apartment).
If you need to have a stable identity associated with the thread. For example, you might want to use a dedicated thread to abort that thread, suspend it, or discover it by name.

Timer
Timers are lightweight objects that enable you to specify a delegate to be called
at a specified time. A thread in the thread pool performs the wait operation.
Using the Timer class is straightforward. You create a Timer, passing a
TimerCallback delegate to the callback method, an object representing state that
will be passed to the callback, an initial raise time, and a time representing the
period between callback invocations. To cancel a pending timer, call the
Timer.Dispose function.
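
A short sketch (the callback name and intervals are illustrative):

using System;
using System.Threading;

class TimerDemo
{
    static void Tick(object state)
    {
        Console.WriteLine("Timer fired: " + state);
    }

    static void Main()
    {
        // 1000 ms before the first callback, then 500 ms between callbacks.
        Timer timer = new Timer(new TimerCallback(Tick), "status", 1000, 500);
        Thread.Sleep(3000);             // let the timer fire a few times
        timer.Dispose();                // cancel the pending timer
    }
}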
Note: There is also a System.Windows.Forms.Timer class. That
class is based on operating-system timer support, and if you are not
pumping messages on the thread, your timer callbacks will not occur.
This makes the System.Threading.Timer more useful in many
scenarios.
Monitor
Monitor objects expose the ability to synchronize access to a region of code by
taking and releasing a lock on a particular object using the Monitor.Enter,
Monitor.TryEnter, and Monitor.Exit methods. Once you have a lock on a code
region, you can use the Monitor.Wait, Monitor.Pulse, and Monitor.PulseAll
methods. Wait releases the lock if it is held and waits to be notified. When Wait is
notified, it returns and obtains the lock again. Both Pulse and PulseAll signal for
the next thread in the wait queue to proceed.
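
A minimal sketch of the Enter/Exit pattern (names are ours); C#'s lock statement expands to exactly this try/finally shape:

using System;
using System.Threading;

class MonitorDemo
{
    static readonly object sync = new object();
    static int counter = 0;

    static void Add()
    {
        Monitor.Enter(sync);      // take the lock on a reference type
        try
        {
            counter++;            // critical section
        }
        finally
        {
            Monitor.Exit(sync);   // always release, even if an exception occurs
        }
    }

    static void Main()
    {
        Thread a = new Thread(new ThreadStart(Add));
        Thread b = new Thread(new ThreadStart(Add));
        a.Start(); b.Start();
        a.Join(); b.Join();
        Console.WriteLine(counter);   // 2
    }
}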
Monitor locks objects (that is, reference types), not value types. While you can
pass a value type to Enter and Exit, it is boxed separately for each call. Since
each call creates a separate object, Enter never blocks, and the code it is
supposedly protecting is not really synchronized. In addition, the object passed to
Exit is different from the object passed to Enter, so Monitor throws
SynchronizationLockException with the message "Object synchronization
method was called from an unsynchronized block of code."
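
The pitfall is easy to reproduce in a few lines (a deliberately broken sketch, names ours):

using System;
using System.Threading;

class BoxingPitfall
{
    static void Main()
    {
        int token = 0;
        Monitor.Enter(token);        // boxes token into a new object; never blocks
        try
        {
            Monitor.Exit(token);     // boxes AGAIN, into a different object...
        }
        catch (SynchronizationLockException e)
        {
            // ...so the lock taken by Enter is not the one being released.
            Console.WriteLine(e.Message);
        }
    }
}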
WaitHandle

It is important to note the distinction between use of Monitor and WaitHandle
objects. Monitor objects are purely managed, fully portable, and might be more
efficient in terms of operating-system resource requirements. WaitHandle
objects represent operating-system waitable objects, are useful for synchronizing
between managed and unmanaged code, and expose some advanced
operating-system features like the ability to wait on many objects at once.
The WaitHandle class encapsulates Win32 synchronization handles, and is used
to represent all synchronization objects in the runtime that allow multiple wait
operations. Although WaitHandle objects represent operating-system
synchronization objects and therefore expose advanced functionality, they are
correspondingly less portable than the fully managed Monitor.

Examples of classes derived from WaitHandle are:

Mutex
AutoResetEvent
ManualResetEvent
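
For instance, here is an AutoResetEvent sketch (names are ours) in which one thread blocks in WaitOne until another signals the handle:

using System;
using System.Threading;

class WaitHandleDemo
{
    // Initially non-signaled; Set releases one waiter, then resets automatically.
    static AutoResetEvent signal = new AutoResetEvent(false);

    static void Waiter()
    {
        signal.WaitOne();               // block until the handle is signaled
        Console.WriteLine("Signal received");
    }

    static void Main()
    {
        Thread t = new Thread(new ThreadStart(Waiter));
        t.Start();
        Thread.Sleep(200);
        signal.Set();                   // signal the waiting thread
        t.Join();
    }
}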

Mutex
You can use a Mutex object to synchronize between threads and across
processes. Although Mutex doesn't have all the wait and pulse functionality of
the Monitor class, it does offer the creation of named mutexes that can be used
between processes.
You call WaitOne, WaitAll, or WaitAny to request ownership of the Mutex. The
state of the Mutex is signaled if no thread owns it.
If a thread owns a Mutex, that thread can specify the same Mutex in repeated
wait-request calls without blocking its execution; however, it must release the
Mutex the same number of times to release ownership.
If a thread terminates normally while owning a Mutex, the state of the Mutex is
set to signaled and the next waiting thread gets ownership. The Mutex class
corresponds to a Win32 CreateMutex call.
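
A minimal sketch follows (the mutex name is purely illustrative):

using System;
using System.Threading;

class MutexDemo
{
    // A named mutex is visible across processes; the name here is illustrative.
    static Mutex mutex = new Mutex(false, "MyApp.SingleInstance");

    static void Main()
    {
        mutex.WaitOne();                // request ownership; blocks while another
                                        // thread or process owns the mutex
        try
        {
            Console.WriteLine("Inside the protected region");
        }
        finally
        {
            mutex.ReleaseMutex();       // release once per successful WaitOne
        }
    }
}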
Interlocked
The Interlocked methods CompareExchange, Decrement, Exchange, and
Increment provide a simple mechanism for synchronizing access to a variable
that is shared by multiple threads. The threads of different processes can use this
mechanism if the variable is in shared memory.
The Increment and Decrement functions combine the operations of
incrementing or decrementing the variable and checking the resulting value. This
atomic operation is useful in a multitasking operating system, in which the system
can interrupt one thread's execution to grant a slice of processor time to another
thread. Without such synchronization, one thread could increment a variable but
be interrupted by the system before it could check the resulting value of the
variable. A second thread could then increment the same variable. When the first
thread receives its next time slice, it will check the value of the variable, which
has now been incremented not once but twice. The Interlocked variable access
functions protect against this kind of error.
The Exchange function atomically exchanges the values of the specified
variables. The CompareExchange function combines two operations: comparing

two values and storing a third value in one of the variables, based on the
outcome of the comparison. CompareExchange can be used to protect
computations that are more complicated than simple increment and decrement.
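
A sketch showing why Increment matters (names are ours); with a plain counter++ in place of the Interlocked call, the final value could be less than 200000:

using System;
using System.Threading;

class InterlockedDemo
{
    static int counter = 0;

    static void Worker()
    {
        for (int i = 0; i < 100000; i++)
            Interlocked.Increment(ref counter);   // atomic read-modify-write
    }

    static void Main()
    {
        Thread a = new Thread(new ThreadStart(Worker));
        Thread b = new Thread(new ThreadStart(Worker));
        a.Start(); b.Start();
        a.Join(); b.Join();
        Console.WriteLine(counter);   // always 200000
    }
}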

ReaderWriterLock
ReaderWriterLock allows multiple threads to read a resource concurrently, but
requires a thread to wait for an exclusive lock in order to write to the resource.
Within your application you might use a ReaderWriterLock to provide
cooperative synchronization among threads that access a shared resource. In
this case, locks are taken on the ReaderWriterLock itself. As with any thread
synchronization mechanism, you must ensure that no threads bypass the
ReaderWriterLock.
Alternatively, you might design a class that encapsulates a resource. This class
might use a ReaderWriterLock to implement its locking scheme for the
resource. ReaderWriterLock uses an efficient design, and thus can be used to
synchronize individual objects.
Structure your application to minimize the duration of reads and writes. Long
writes hurt throughput directly because the write lock is exclusive. Long reads
block waiting writers, and if there is at least one thread waiting for the write lock
then threads that request new reader locks will be blocked as well.
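
A minimal sketch of the acquire/release pattern (names are ours):

using System;
using System.Threading;

class ReaderWriterDemo
{
    static ReaderWriterLock rwLock = new ReaderWriterLock();
    static int sharedValue = 0;

    static void Read()
    {
        rwLock.AcquireReaderLock(Timeout.Infinite);   // many readers may hold this at once
        try
        {
            Console.WriteLine("Read: " + sharedValue);
        }
        finally
        {
            rwLock.ReleaseReaderLock();
        }
    }

    static void Write()
    {
        rwLock.AcquireWriterLock(Timeout.Infinite);   // exclusive; readers and writers wait
        try
        {
            sharedValue++;
        }
        finally
        {
            rwLock.ReleaseWriterLock();
        }
    }

    static void Main()
    {
        Thread w = new Thread(new ThreadStart(Write));
        Thread r = new Thread(new ThreadStart(Read));
        w.Start(); r.Start();
        w.Join(); r.Join();
    }
}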
