A new thread begins its lifecycle in the Unstarted state. The thread remains in the
Unstarted state until the program calls Thread method Start, which places the
thread in the Started state (sometimes called the Ready or Runnable state) and
immediately returns control to the calling thread. Then the thread that invoked
Start, the newly Started thread and any other threads in the program execute
concurrently.
The highest priority Started thread enters the Running state (i.e., begins
executing) when the operating system assigns a processor to the thread (Section
12.3 discusses thread priorities). When a Started thread receives a processor for
the first time and becomes a Running thread, the thread executes its
ThreadStart delegate, which specifies the actions the thread will perform during
its lifecycle. When a program creates a new Thread, the program specifies the
Thread's ThreadStart delegate as the argument to the Thread constructor. The
ThreadStart delegate must be a method that returns void and takes no
arguments.
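The lifecycle described above can be sketched as follows. This is a minimal illustration, not code from the text; the PrintMessage method and the "Worker" thread name are made up for the example:

```csharp
using System;
using System.Threading;

class ThreadStartDemo
{
    // A ThreadStart-compatible method: returns void and takes no arguments.
    static void PrintMessage()
    {
        Console.WriteLine("Worker running on thread: " + Thread.CurrentThread.Name);
    }

    public static bool Run()
    {
        // The ThreadStart delegate is the argument to the Thread constructor.
        Thread worker = new Thread(new ThreadStart(PrintMessage));
        worker.Name = "Worker";

        // Start places the thread in the Started state and returns immediately;
        // the calling thread and the worker then execute concurrently.
        worker.Start();

        // Wait for the worker's ThreadStart delegate to terminate (Stopped state).
        worker.Join();
        return !worker.IsAlive; // true once the worker has stopped
    }

    static void Main() { Console.WriteLine(Run()); }
}
```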
A Running thread enters the Stopped (or Dead) state when its ThreadStart
delegate terminates. Note that a program can force a thread into the Stopped
state by calling Thread method Abort on the appropriate Thread object.
The FCL provides the C# programmer with the ability to specify that
applications contain threads of execution, where each thread designates a
portion of a program that may execute concurrently with other threads. This
capability is called multithreading.
A thread is initialized using the Thread class's constructor, which receives a
ThreadStart delegate. This delegate specifies the method that contains the tasks
a thread will perform. A thread remains in the Unstarted state until the thread's
Start method is called, at which point the thread enters the Started state. A thread in the
Started state enters the Running state when the system assigns a processor to
the thread. The system assigns the processor to the highest-priority Started
thread. A thread enters the Stopped state when its ThreadStart delegate
completes or terminates. A thread is forced into the Stopped state when its Abort
method is called (by itself or by another thread). A Running thread enters the
Blocked state when the thread issues an input/output request. A Blocked thread
becomes Started when the I/O it is waiting for completes. A Blocked thread
cannot use a processor, even if one is available.
If a thread needs to sleep, it calls method Sleep. A thread wakes up when the
designated sleep interval expires. If a thread cannot continue executing unless
another thread terminates, the first thread, referred to as the dependent thread,
calls the other thread's Join method to "join" the two threads. When two threads
are joined, the dependent thread leaves the WaitSleepJoin state when the other
thread finishes execution. When a thread encounters code that it cannot yet run,
the thread can call Monitor method Wait until certain actions occur that enable
the thread to continue executing. This method call puts the thread into the
WaitSleepJoin state. Any thread in the WaitSleepJoin state can leave that state if
another thread invokes Thread method Interrupt on the thread in the
WaitSleepJoin state. If a thread has called Monitor method Wait, a
corresponding call to Monitor method Pulse or PulseAll by another thread in the
program will transition the original thread from the WaitSleepJoin state to the
Started state.
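The Sleep, Join, Wait, and Pulse transitions described above can be combined in a small producer/consumer sketch. This is an illustrative example, assuming a producer that sleeps briefly before publishing a value; the names and values are invented:

```csharp
using System;
using System.Threading;

class WaitPulseDemo
{
    static readonly object sync = new object();
    static int sharedValue;
    static bool valueReady;

    static void Producer()
    {
        Thread.Sleep(50);           // WaitSleepJoin until the sleep interval expires
        lock (sync)                 // lock is built on Monitor.Enter/Exit
        {
            sharedValue = 42;
            valueReady = true;
            Monitor.Pulse(sync);    // lets one waiting thread leave WaitSleepJoin
        }
    }

    public static int Run()
    {
        Thread producer = new Thread(Producer);
        producer.Start();

        lock (sync)
        {
            while (!valueReady)
                Monitor.Wait(sync); // releases the lock and enters WaitSleepJoin
        }

        producer.Join();            // the caller is the dependent thread here
        return sharedValue;
    }

    static void Main() { Console.WriteLine(Run()); } // prints 42
}
```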
If Thread method Suspend is called on a thread, the thread enters the
Suspended state. A thread leaves the Suspended state when a separate thread
invokes Thread method Resume on the suspended thread.
Every C# thread has a priority. The job of the thread scheduler is to keep the
highest-priority thread running at all times and, if there is more than one
highest-priority thread, to ensure that all equally high-priority threads execute for a
quantum at a time in round-robin fashion. A thread's priority can be adjusted with
the Priority property, which is assigned an argument from the ThreadPriority
enumeration.
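Assigning a priority is a one-line operation on the Thread object. A small sketch (the worker body here is a placeholder):

```csharp
using System;
using System.Threading;

class PriorityDemo
{
    public static ThreadPriority Run()
    {
        Thread worker = new Thread(() => Thread.Sleep(10)); // placeholder work

        // Priority is assigned a member of the ThreadPriority enumeration.
        worker.Priority = ThreadPriority.BelowNormal;
        ThreadPriority assigned = worker.Priority;

        worker.Start();
        worker.Join();
        return assigned;
    }

    static void Main() { Console.WriteLine(Run()); } // prints BelowNormal
}
```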
Thread Pooling
You can use thread pooling to make much more efficient use of multiple threads,
depending on your application. Many applications use multiple threads, but often
those threads spend a great deal of time in the sleeping state waiting for an
event to occur. Other threads might enter a sleeping state and be awakened only
periodically to poll for a change or update status information before going to
sleep again. Using thread pooling provides your application with a pool of worker
threads that are managed by the system, allowing you to concentrate on
application tasks rather than thread management. In fact, if you have a number
of short tasks that require more than one thread, using the ThreadPool class is
the easiest and best way to take advantage of multiple threads. Using a thread
pool enables the system to optimize thread time slices, taking into account all
the current processes on the computer, for better throughput not only in this
process but also in other processes that your application knows nothing about.
The .NET Framework uses thread pools for several purposes: asynchronous
calls, System.Net socket connections, asynchronous I/O completion, and timers
and registered wait operations, among others.
You use the thread pool by calling ThreadPool.QueueUserWorkItem from
managed code (or CorQueueUserWorkItem from unmanaged code) and
passing a WaitCallback delegate wrapping the method that you want to add to
the queue. You can also queue work items that are related to a wait operation to
the thread pool by using ThreadPool.RegisterWaitForSingleObject and passing a
WaitHandle that, when signaled or when timed out, raises a call to the method
wrapped by the WaitOrTimerCallback delegate. In both cases, the thread pool
uses or creates a background thread to invoke the callback method.
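Queuing a work item can be sketched as follows. This example assumes a lambda as the WaitCallback and uses a ManualResetEvent only so the caller can observe completion; the "task-1" state string is invented:

```csharp
using System;
using System.Threading;

class ThreadPoolDemo
{
    public static string Run()
    {
        string result = null;
        using (ManualResetEvent done = new ManualResetEvent(false))
        {
            // The WaitCallback delegate wraps the method to execute; the second
            // argument to QueueUserWorkItem is passed to it as 'state'.
            ThreadPool.QueueUserWorkItem(state =>
            {
                result = "processed " + state; // runs on a background pool thread
                done.Set();
            }, "task-1");

            done.WaitOne(); // block until the pool thread signals completion
        }
        return result;
    }

    static void Main() { Console.WriteLine(Run()); } // prints "processed task-1"
}
```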
You can also use the unsafe methods ThreadPool.UnsafeQueueUserWorkItem
and ThreadPool.UnsafeRegisterWaitForSingleObject when you know that the
caller's stack is irrelevant to any security checks performed during the execution
of the queued task. QueueUserWorkItem and RegisterWaitForSingleObject
both capture the caller's stack, which is merged into the stack of the thread pool
thread when the thread pool thread starts to execute a task. If a security check is
required, that entire stack must be checked. Although the check provides safety,
it also has a performance cost. Using the Unsafe method calls does not provide
complete safety, but it will provide better performance.
There is only one ThreadPool object per process. The thread pool is created the
first time you call ThreadPool.QueueUserWorkItem, or when a timer or
registered wait operation queues a callback method. One thread monitors all
tasks that have been queued to the thread pool. When a task has completed, a
thread from the thread pool executes the corresponding callback method. There
is no way to cancel a work item after it has been queued.
The number of operations that can be queued to the thread pool is limited only by
available memory; however, the thread pool will enforce a limit on the number of
threads it allows to be active in the process simultaneously (which is subject to
the number of CPUs and other considerations). Each thread uses the default
stack size, runs at the default priority, and is in the multithreaded apartment. If
one of the threads becomes idle (as when waiting on an event) in managed
code, the thread pool injects another worker thread to keep all the processors
busy. If all thread pool threads are constantly busy, but there is pending work in
the queue, the thread pool will, after some period of time, create another worker
thread. However, the number of threads will never exceed the maximum value.
The ThreadPool also switches to the correct AppDomain when executing
ThreadPool callbacks.
There are several scenarios in which it is appropriate to create and manage your
own threads instead of using the ThreadPool. You should do so:
If you have a task that might run a long time (and therefore block other
tasks).
If you need to place threads into a single-threaded apartment (all
ThreadPool threads are in the multithreaded apartment).
Timer
Timers are lightweight objects that enable you to specify a delegate to be called
at a specified time. A thread in the thread pool performs the wait operation.
Using the Timer class is straightforward. You create a Timer, passing a
TimerCallback delegate to the callback method, an object representing state that
will be passed to the callback, an initial raise time, and a time representing the
period between subsequent invocations of the callback.
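Those four constructor arguments can be seen in a minimal one-shot timer sketch (the "tick" state string and the 100 ms due time are invented for illustration):

```csharp
using System;
using System.Threading;

class TimerDemo
{
    public static string Run()
    {
        string result = null;
        using (ManualResetEvent fired = new ManualResetEvent(false))
        // Arguments: TimerCallback, state object, due time (ms), period (ms).
        // Timeout.Infinite as the period makes this a one-shot timer.
        using (Timer timer = new Timer(
            state => { result = "fired: " + state; fired.Set(); },
            "tick", 100, Timeout.Infinite))
        {
            fired.WaitOne(); // a thread-pool thread invokes the callback
        }
        return result;
    }

    static void Main() { Console.WriteLine(Run()); } // prints "fired: tick"
}
```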
Mutex
You can use a Mutex object to synchronize between threads and across
processes. Although Mutex doesn't have all the wait and pulse functionality of
the Monitor class, it does offer the creation of named mutexes that can be used
between processes.
You call WaitOne, WaitAll, or WaitAny to request ownership of the Mutex. The
state of the Mutex is signaled if no thread owns it.
If a thread owns a Mutex, that thread can specify the same Mutex in repeated
wait-request calls without blocking its execution; however, it must release the
Mutex the same number of times to release ownership.
If a thread terminates normally while owning a Mutex, the state of the Mutex is
set to signaled and the next waiting thread gets ownership. The Mutex class
corresponds to a Win32 CreateMutex call.
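The WaitOne/ReleaseMutex pairing can be sketched with two threads sharing a counter. This is an illustrative example with invented names; an unnamed (process-local) Mutex is used here, whereas a named one would also work across processes:

```csharp
using System;
using System.Threading;

class MutexDemo
{
    static readonly Mutex mutex = new Mutex(); // unnamed: local to this process
    static int counter;

    static void Increment()
    {
        for (int i = 0; i < 1000; i++)
        {
            mutex.WaitOne();     // request ownership; blocks if another thread owns it
            try { counter++; }
            finally { mutex.ReleaseMutex(); } // one release per successful WaitOne
        }
    }

    public static int Run()
    {
        Thread t1 = new Thread(Increment);
        Thread t2 = new Thread(Increment);
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
        return counter;
    }

    static void Main() { Console.WriteLine(Run()); } // prints 2000
}
```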
Interlocked
The Interlocked methods CompareExchange, Decrement, Exchange, and
Increment provide a simple mechanism for synchronizing access to a variable
that is shared by multiple threads. The threads of different processes can use this
mechanism if the variable is in shared memory.
The Increment and Decrement functions combine the operations of
incrementing or decrementing the variable and checking the resulting value. This
atomic operation is useful in a multitasking operating system, in which the system
can interrupt one thread's execution to grant a slice of processor time to another
thread. Without such synchronization, one thread could increment a variable but
be interrupted by the system before it could check the resulting value of the
variable. A second thread could then increment the same variable. When the first
thread receives its next time slice, it will check the value of the variable, which
has now been incremented not once but twice. The Interlocked variable access
functions protect against this kind of error.
The Exchange function atomically exchanges the values of the specified
variables. The CompareExchange function combines two operations: comparing
two values and storing a third value in one of the variables, based on the
outcome of the comparison. CompareExchange can be used to protect
computations that are more complicated than simple increment and decrement.
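A short sketch of Increment and CompareExchange, with invented thread and iteration counts. Because each increment is atomic, no updates are lost even though four threads race on the same variable:

```csharp
using System;
using System.Threading;

class InterlockedDemo
{
    static int counter;

    public static int Run()
    {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(() =>
            {
                for (int j = 0; j < 10000; j++)
                    Interlocked.Increment(ref counter); // atomic increment-and-read
            });
            threads[i].Start();
        }
        foreach (Thread t in threads)
            t.Join();

        int total = counter; // 40000: no increments were lost

        // CompareExchange stores 0 in counter only if counter still equals
        // total; the return value is the original value of counter.
        Interlocked.CompareExchange(ref counter, 0, total);
        return total;
    }

    static void Main() { Console.WriteLine(Run()); } // prints 40000
}
```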
ReaderWriterLock
ReaderWriterLock allows multiple threads to read a resource concurrently, but
requires a thread to wait for an exclusive lock in order to write to the resource.
Within your application you might use a ReaderWriterLock to provide
cooperative synchronization among threads that access a shared resource. In
this case, locks are taken on the ReaderWriterLock itself. As with any thread
synchronization mechanism, you must ensure that no threads bypass the
ReaderWriterLock.
Alternatively, you might design a class that encapsulates a resource. This class
might use a ReaderWriterLock to implement its locking scheme for the
resource. ReaderWriterLock uses an efficient design, and thus can be used to
synchronize individual objects.
Structure your application to minimize the duration of reads and writes. Long
writes hurt throughput directly because the write lock is exclusive. Long reads
block waiting writers, and if there is at least one thread waiting for the write lock
then threads that request new reader locks will be blocked as well.
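The second design mentioned above, a class that encapsulates a resource behind a ReaderWriterLock, can be sketched as follows (the resource is a single invented integer, kept trivial so the locking pattern stands out):

```csharp
using System;
using System.Threading;

class ReaderWriterDemo
{
    static readonly ReaderWriterLock rwLock = new ReaderWriterLock();
    static int resource;

    static int Read()
    {
        rwLock.AcquireReaderLock(Timeout.Infinite); // many readers may hold this at once
        try { return resource; }
        finally { rwLock.ReleaseReaderLock(); }
    }

    static void Write(int value)
    {
        rwLock.AcquireWriterLock(Timeout.Infinite); // exclusive: excludes readers and writers
        try { resource = value; }
        finally { rwLock.ReleaseWriterLock(); }
    }

    public static int Run()
    {
        Write(7);
        return Read();
    }

    static void Main() { Console.WriteLine(Run()); } // prints 7
}
```

Keeping each lock hold short, as the text recommends, is easy with this shape: the try/finally bodies contain only the access itself.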