
Chapter 4: Threads

Operating System Concepts – 9th Edition Silberschatz, Galvin and Gagne


Chapter 4: Threads
 Overview
 Multicore Programming
 Multithreading Models
 Thread Libraries
 Implicit Threading
 Threading Issues
 Operating System Examples

Objectives
 To introduce the notion of a thread.
 To discuss the APIs for the Pthreads, Windows,
and Java thread libraries
 To explore several strategies that provide
implicit threading
 To examine issues related to multithreaded
programming
 To cover operating system support for threads
in Windows and Linux

Single and Multithreaded Processes
 Thread (lightweight process): a basic unit of CPU utilization
• It comprises a thread ID, a program counter, a register set,
and a stack
• It shares with the other threads of the same process the code
section, the data section, and operating system resources
 A traditional (heavyweight) process has one address space and a
single thread of control
 A multithreaded process has multiple threads of control
(lightweight processes) running in parallel in the same address
space
Why threads?
 Most modern applications are multithreaded
 Word processor (convenience)
 Thread for displaying graphics
 Thread for reading keystrokes from the user
 Thread for spelling and grammar checking
 Thread for reformatting the text
 Thread for auto saving
 Web browser (efficiency)
 Thread to display images and text
 Thread to retrieve data from network
 Thread to listen for user requests
 Web server (similar tasks): requests for pages come in and the requested
page is sent back to the client
 Thread to read the incoming request for work from the network
 Thread for each client request

Why threads?
 RPC servers are multithreaded. When a server receives a
message, it services the message using a separate thread. This
allows the server to service several concurrent requests.
 Several threads operate in the kernel, and each thread performs
a specific task, such as managing devices, managing memory,
or interrupt handling.

Benefits
 Responsiveness
 Multithreading an interactive application may allow
a program to continue running even if part of it is
blocked or is performing a lengthy operation.
 A web browser allows user interaction in one thread
while an image is being loaded in another thread.
 Resource Sharing
 Processes can only share data through techniques
such as shared memory and message passing
 Threads share the memory and the resources of the
process to which they belong (by default).
 Different threads in the same memory space
share the code section, data section, …
 It allows an application to have several different
threads of activity within the same address space.

Benefits
 Scalability (of MP Architectures)
 A single-threaded process can run on only one CPU, no
matter how many are available
 Multi-threading on a multi-CPU machine increases
concurrency.
 Efficiency (Economy)
 Allocating memory and resources for a process is
costly
 It is more economical to create and context switch
threads than processes because threads share
resources of the process to which they belong.
 Creating a thread is about 30 times faster than
creating a process (in Solaris, for example)
 Context switching between threads is about 5 times
faster than switching between processes
Process vs. Thread
 Processes
 are typically independent
 carry considerable state information
 have separate address spaces
 interact through inter-process communication
mechanisms
 Threads
 share memory and other resources
 share the state information of a single process
 Use the address space of the parent process
 Context switching between threads in the same
process is typically faster

Multicore Programming
 Multicore or multiprocessor systems put pressure on
programmers; challenges include:
 Dividing activities
 Balance
 Data splitting
 Data dependency
 Testing and debugging
 Parallelism implies a system can perform more than one task
simultaneously
 Concurrency supports more than one task making progress
 On a single processor / core, the scheduler provides concurrency

Multicore Programming (Cont.)
 Types of parallelism
 Data parallelism – distributes subsets of the same data
across multiple cores, same operation on each
 Task parallelism – distributing threads across cores, each
thread performing unique operation
 As # of threads grows, so does architectural support for
threading
 CPUs have cores as well as hardware threads
 Consider Oracle SPARC T4 with 8 cores, and 8 hardware
threads per core

Concurrency vs. Parallelism
 Concurrent execution on single-core system:

 Parallelism on a multi-core system:

Amdahl’s Law
 Identifies performance gains from adding additional cores to an
application that has both serial and parallel components
 S is the serial portion, N the number of processing cores:

     speedup <= 1 / (S + (1 - S) / N)

 That is, if an application is 75% parallel / 25% serial, moving from 1 to
2 cores results in a speedup of 1 / (0.25 + 0.75/2) = 1.6 times
 As N approaches infinity, speedup approaches 1 / S

 The serial portion of an application has a disproportionate effect on the
performance gained by adding additional cores

 But does the law take contemporary multicore systems into account?
(A small helper that evaluates this bound follows.)
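
The bound is easy to evaluate directly. A minimal sketch in C (the
function name amdahl_speedup is chosen here for illustration, not
taken from the slides):

#include <stdio.h>

/* Upper bound on speedup predicted by Amdahl's law:
   S = serial fraction of the program, N = number of cores. */
double amdahl_speedup(double S, int N)
{
    return 1.0 / (S + (1.0 - S) / N);
}

int main(void)
{
    /* The slide's example: 25% serial, moving from 1 to 2 cores. */
    printf("speedup <= %.2f\n", amdahl_speedup(0.25, 2));  /* 1.60 */
    return 0;
}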

User Threads and Kernel Threads
 User threads - management done by user-level
threads library
 Three primary thread libraries:
 POSIX Pthreads
 Windows threads
 Java threads
 Kernel threads - supported by the kernel and provided by virtually
all general-purpose operating systems, including:
 Windows
 Solaris
 Linux
 Tru64 UNIX
 Mac OS X

Multithreading Models
 Many systems provide support for both
user and kernel threads resulting in
different multithreading models:

 Many-to-One

 One-to-One

 Many-to-Many

Many-to-One
 Maps many user-level threads to one kernel-level
thread
 + Thread management is done in the user
space (fast)
 - The entire process will block if a thread
makes a blocking system call
 - Multiple threads are unable to run in parallel
 + It is suitable for operating systems that do not support
kernel threads
 Examples:
 Solaris Green Threads
 GNU Portable Threads

One-to-One
 Each user-level thread maps to kernel thread
 It provides more concurrency than the many-to-one model
 + Allows another thread to run when a thread makes
a blocking system call
 + Allows multiple threads to run in parallel on
multiprocessor systems
 - Creating a user thread requires creating the corresponding
kernel thread (overhead)
 - Most implementations restrict the number of
threads supported by the system
 Examples
 Windows
 Linux
 Solaris 9 and later

Many-to-Many Model
 Allows many user level threads to be mapped to many
kernel threads
 Multiplexes many user-level threads to smaller or equal
number of kernel-level threads
 The developer can create as many user threads as
necessary, and the corresponding kernel threads can run
in parallel on a multiprocessor
 Allows the operating system to create a sufficient
number of kernel threads
 When a thread performs a blocking system call, the
kernel can schedule another thread for execution
 Solaris prior to version 9

Two-level Model
 Similar to M:M, except that it allows a user
thread to be bound to a kernel thread
 Examples
 IRIX
 HP-UX
 Tru64 UNIX
 Solaris 8 and earlier

Thread Libraries
 A thread library provides the programmer with an
API for creating and managing threads.
 There are two ways of implementing a thread library.
 The first approach is to provide a library entirely in
user space with no kernel support. All code and data
structures for the library exist in user space.
 This means that invoking a function in the library
results in a local function call in user space and not
a system call.
 The second approach is to implement a kernel-level
library supported directly by the operating system.
In this case, code and data structures for the library
exist in kernel space. Invoking a function in the API
for the library typically results in a system call to the
kernel.
Thread Libraries
 Three main thread libraries are in use today: POSIX
Pthreads, Windows, and Java.
 Pthreads, the threads extension of the POSIX
standard, may be provided as either a user-level or a
kernel-level library.
 The Windows thread library is a kernel-level library
available on Windows systems.
 The Java thread API allows threads to be created
and managed directly in Java programs. However,
because in most instances the JVM is running on top
of a host operating system, the Java thread API is
generally implemented using a thread library
available on the host system.
 This means that on Windows systems, Java threads
are typically implemented using the Windows API;
UNIX and Linux systems often use Pthreads.
Thread Creation
 Two general strategies for creating multiple threads:
 Asynchronous threading: once the parent creates a child
thread, the parent resumes its execution, so that the parent
and child execute concurrently.
 Each thread runs independently of every other thread, and
the parent thread need not know when its child terminates.
 Synchronous threading occurs when the parent thread
creates one or more children and then must wait for all of
its children to terminate before it resumes
 Here, the threads created by the parent perform work
concurrently, but the parent cannot continue until this work
has been completed.
 Once each thread has finished its work, it terminates and
joins with its parent.

Pthreads Example

Pthreads Example (Cont.)

Pthreads
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

void *print_message_function( void *ptr );

int main(void)
{
    pthread_t thread1, thread2;
    char *message1 = "Thread 1";
    char *message2 = "Thread 2";
    int iret1, iret2;

    /* Create independent threads, each of which will execute the function */
    iret1 = pthread_create( &thread1, NULL, print_message_function, (void*) message1);
    iret2 = pthread_create( &thread2, NULL, print_message_function, (void*) message2);
Pthreads
    /* Wait until the threads complete before main continues */
    pthread_join( thread1, NULL);
    pthread_join( thread2, NULL);

    printf("Thread 1 returns: %d\n", iret1);
    printf("Thread 2 returns: %d\n", iret2);
    return 0;
}

void *print_message_function( void *ptr )
{
    char *message = (char *) ptr;
    printf("%s \n", message);
    return NULL;    /* a Pthreads start routine should return a value */
}

Pthreads
 Sample output of the program above:
 Thread 1
 Thread 2
 Thread 1 returns: 0
 Thread 2 returns: 0

Windows Multithreaded C Program

Windows Multithreaded C Program (Cont.)
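
The program shown in these figure slides is not reproduced in this
text. Below is a hedged sketch of the same pattern using the Windows
API calls CreateThread() and WaitForSingleObject(); the summation
work and variable names are illustrative, not the book's exact listing.

#include <windows.h>
#include <stdio.h>

DWORD Sum;   /* data shared by the threads */

/* Thread start routine: sums the integers 1..*Param. */
DWORD WINAPI Summation(LPVOID Param)
{
    DWORD Upper = *(DWORD *) Param;
    for (DWORD i = 1; i <= Upper; i++)
        Sum += i;
    return 0;
}

int main(void)
{
    DWORD ThreadId;
    HANDLE ThreadHandle;
    DWORD Param = 10;

    /* Create the thread with default security, stack size, and flags. */
    ThreadHandle = CreateThread(NULL, 0, Summation, &Param, 0, &ThreadId);
    if (ThreadHandle != NULL) {
        /* Wait for the thread to finish (the Windows equivalent of join). */
        WaitForSingleObject(ThreadHandle, INFINITE);
        CloseHandle(ThreadHandle);
        printf("sum = %lu\n", Sum);
    }
    return 0;
}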

Java Threads
 Java threads are managed by the JVM
 Typically implemented using the threads model provided by
underlying OS
 Java threads may be created by:

 Extending Thread class


 Implementing the Runnable interface

Java Multithreaded Program

Java Multithreaded Program (Cont.)

Implicit Threading
 Growing in popularity as the number of threads
increases and program correctness becomes more
difficult to ensure with explicit threads
 Creation and management of threads done by
compilers and run-time libraries rather than
programmers
 Three methods explored
 Thread Pools
 OpenMP
 Grand Central Dispatch
 Other methods include Intel Threading
Building Blocks (TBB) and the java.util.concurrent
package
Thread Pools
 Idea :
 Create a number of threads at process startup and place
them in a pool
 When a server receives a request, it awakens a thread from
the pool
 Passing the thread the request to service
 Once the thread completes its service, it returns to the
pool
 If the pool contains no available thread, the server waits
until one becomes free.
 Advantages:
 Usually slightly faster to service a request with an existing
thread than to create a new thread
 Allows the number of threads in the application(s) to be
bound to the size of the pool
 Unlimited threads could exhaust system resources
 Number of threads per pool can be based on: # of CPUs, size of memory,
# of expected client requests, … (a minimal pool sketch follows)
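
A minimal, hedged sketch of this idea using Pthreads: a fixed number
of worker threads pull work items from a small queue protected by a
mutex and a condition variable. Names such as pool_submit() and
worker() are invented for illustration; production pools (e.g.,
Windows thread pools or java.util.concurrent executors) also handle
shutdown, growth, and error reporting.

#include <stdio.h>
#include <pthread.h>

#define NUM_WORKERS 4
#define QUEUE_SIZE  16

typedef void (*task_fn)(int);

/* Tiny circular work queue protected by a mutex + condition variable. */
static struct { task_fn fn; int arg; } queue[QUEUE_SIZE];
static int head = 0, tail = 0, count = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

/* Submit one task; returns -1 if the queue is full
   (a real pool would block the caller or grow the queue). */
int pool_submit(task_fn fn, int arg)
{
    int ok = -1;
    pthread_mutex_lock(&lock);
    if (count < QUEUE_SIZE) {
        queue[tail].fn = fn;
        queue[tail].arg = arg;
        tail = (tail + 1) % QUEUE_SIZE;
        count++;
        ok = 0;
        pthread_cond_signal(&not_empty);
    }
    pthread_mutex_unlock(&lock);
    return ok;
}

/* Each worker loops forever: wait for a task, run it, return to the pool. */
static void *worker(void *unused)
{
    (void) unused;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0)
            pthread_cond_wait(&not_empty, &lock);
        task_fn fn = queue[head].fn;
        int arg = queue[head].arg;
        head = (head + 1) % QUEUE_SIZE;
        count--;
        pthread_mutex_unlock(&lock);
        fn(arg);                      /* service the request */
    }
    return NULL;
}

static void print_task(int n) { printf("servicing request %d\n", n); }

int main(void)
{
    pthread_t workers[NUM_WORKERS];
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_create(&workers[i], NULL, worker, NULL);

    for (int i = 0; i < 8; i++)
        pool_submit(print_task, i);

    pthread_exit(NULL);   /* keep the workers alive; no clean shutdown here */
}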
OpenMP
 Set of compiler directives and an
API for C, C++, FORTRAN
 Provides support for parallel
programming in shared-memory
environments
 Identifies parallel regions –
blocks of code that can run in
parallel
#pragma omp parallel
creates as many threads as there are cores

#pragma omp parallel for
for (i = 0; i < N; i++) {
    c[i] = a[i] + b[i];
}
runs the for loop in parallel (a complete, compilable version follows)
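
Put together as a complete program, a minimal sketch (assumes an
OpenMP-capable compiler, e.g. gcc -fopenmp; the array names follow
the snippet above):

#include <omp.h>
#include <stdio.h>

#define N 1000

int main(void)
{
    static double a[N], b[N], c[N];

    for (int i = 0; i < N; i++) {   /* initialize the input vectors */
        a[i] = i;
        b[i] = 2.0 * i;
    }

    #pragma omp parallel
    {
        /* as many threads as there are cores; each prints its id */
        printf("hello from thread %d\n", omp_get_thread_num());
    }

    #pragma omp parallel for
    for (int i = 0; i < N; i++)     /* iterations divided among threads */
        c[i] = a[i] + b[i];

    printf("c[N-1] = %f\n", c[N - 1]);
    return 0;
}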
Grand Central Dispatch
 Apple technology for Mac OS X and iOS
operating systems
 Extensions to C, C++ languages, API, and run-
time library
 Allows identification of parallel sections
 Manages most of the details of threading
 A block is code enclosed in “^{ }”, for example:
^{ printf("I am a block"); }
 Blocks placed in dispatch queue
 Assigned to available thread in thread pool when
removed from queue

Grand Central Dispatch
 Two types of dispatch queues:
 serial – blocks removed in FIFO order, queue
is per process, called main queue
Programmers can create additional serial
queues within program
 concurrent – removed in FIFO order but
several may be removed at a time
Three system-wide queues with priorities
low, default, and high (a short usage sketch follows)
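
A hedged sketch of submitting a block to one of the system-wide
concurrent queues (assumes macOS and clang, which provide the blocks
extension and libdispatch):

#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void)
{
    /* One of the system-wide concurrent queues (default priority). */
    dispatch_queue_t queue =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    /* A semaphore lets main wait until the block has run. */
    dispatch_semaphore_t done = dispatch_semaphore_create(0);

    /* Submit the block; GCD runs it on a thread from its internal pool. */
    dispatch_async(queue, ^{
        printf("I am a block\n");
        dispatch_semaphore_signal(done);
    });

    dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
    return 0;
}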

Threading Issues
 Semantics of fork() and exec() system calls
 Signal handling
 Synchronous and asynchronous
 Thread cancellation of target thread
 Asynchronous or deferred
 Thread-local storage
 Scheduler Activations

Semantics of fork() and exec()
 fork() system call is used to create a separate, duplicate
process.
 Does fork() duplicate only the calling thread or all
threads?
 Some UNIX systems have two versions of fork()
 Duplicate all threads
 Duplicate only the thread that invoked the fork() system call
 The exec() system call works in the same way as usual
 If a thread invokes the exec() the program specified in
the parameters of exec() will replace the entire process,
including all threads.
 If exec() is called immediately after forking, duplicating
only the calling thread is sufficient
 If the separate process does not call exec() after forking,
all threads should be duplicated (see the sketch below)
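
A hedged sketch of the common pattern behind the last two bullets:
the child calls exec() immediately after fork(), so duplicating only
the forking thread is enough (the background routine and the use of
/bin/ls are illustrative):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>
#include <pthread.h>

/* Some other thread doing unrelated work in the parent process. */
static void *background(void *arg)
{
    (void) arg;
    sleep(1);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, background, NULL);

    pid_t pid = fork();   /* the child contains only the calling thread */
    if (pid == 0) {
        execlp("/bin/ls", "ls", (char *) NULL);  /* replaces the child */
        perror("execlp");                        /* reached only on failure */
        exit(1);
    }
    waitpid(pid, NULL, 0);
    pthread_join(tid, NULL);
    return 0;
}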
Signal Handling
 Signals are used in UNIX systems to notify a process that
a particular event has occurred
 A signal handler is used to process signals
1. Signal is generated by particular event
2. Signal is delivered to a process
3. Signal is handled
 Synchronous signals
 They are delivered to the same process that performed the
operation causing the signal.
 If a running process performs either illegal memory
access or division by zero, a synchronous signal is
generated
 Asynchronous signals
 When a signal is generated by an event external to a
running process, that process receives the signal
asynchronously
 E.g., terminating a process with <Control>+<C>
 A timer expiring
Signal Handling (Cont.)
 Every signal has a default signal handler that is
run by the kernel when handling the signal
 This default action may be overridden by a user-
defined signal-handler function.
 Where should a signal be delivered? Options:
 Deliver the signal to the thread to which the signal
applies
 Deliver the signal to every thread in the process
 Deliver the signal to certain threads in the process
 Assign a specific thread to receive all signals for the
process (a sketch of this last option follows)
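
A hedged sketch of the last option: every thread blocks SIGINT, and
one dedicated thread receives it synchronously with sigwait() (the
name signal_catcher is illustrative):

#include <stdio.h>
#include <signal.h>
#include <pthread.h>

static sigset_t set;

/* Dedicated thread: waits synchronously for the signals others blocked. */
static void *signal_catcher(void *arg)
{
    int sig;
    (void) arg;
    sigwait(&set, &sig);
    printf("caught signal %d in the dedicated thread\n", sig);
    return NULL;
}

int main(void)
{
    pthread_t tid;

    sigemptyset(&set);
    sigaddset(&set, SIGINT);
    /* Block SIGINT here; threads created afterwards inherit the mask. */
    pthread_sigmask(SIG_BLOCK, &set, NULL);

    pthread_create(&tid, NULL, signal_catcher, NULL);

    printf("press Control-C to deliver SIGINT to the dedicated thread\n");
    pthread_join(tid, NULL);   /* returns once the signal has been handled */
    return 0;
}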

Thread Cancellation
 Cancellation means terminating a thread before it has
finished
 Thread to be canceled is target thread
 Two general approaches:
 Asynchronous cancellation terminates the target thread
immediately
 Deferred cancellation allows the target thread to
periodically check if it should be cancelled
 Pthread code to create and cancel a thread:
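
The code figure referenced here is not reproduced in this text; below
is a minimal self-contained sketch of the same pattern (the worker
routine and the timings are illustrative):

#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

/* Target thread: loops, periodically checking whether it was cancelled. */
static void *worker(void *arg)
{
    (void) arg;
    while (1) {
        pthread_testcancel();   /* deferred cancellation point */
        sleep(1);               /* also a cancellation point on most systems */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;

    pthread_create(&tid, NULL, worker, NULL);   /* create the thread */
    sleep(2);                                   /* let it run for a while */
    pthread_cancel(tid);                        /* request cancellation */
    pthread_join(tid, NULL);                    /* wait for it to terminate */
    printf("worker cancelled\n");
    return 0;
}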

Thread Cancellation (Cont.)
 Invoking thread cancellation requests cancellation, but actual
cancellation depends on thread state
 Pthreads supports three cancellation modes, each defined as a
state and a type:
• Off – cancellation state disabled
• Deferred – state enabled, type deferred
• Asynchronous – state enabled, type asynchronous
 If a thread has cancellation disabled, cancellation remains
pending until the thread enables it
 The default type is deferred
 Cancellation occurs only when the thread reaches a cancellation
point, e.g., pthread_testcancel()
 Then the cleanup handler is invoked
 On Linux systems, thread cancellation is handled through
signals
Thread-Local Storage
 Threads belonging to a process share the data of the process.
 However, in some circumstances, each thread might need its
own copy of certain data
 Thread-local storage (TLS) allows each thread to have its own
copy of data
 Useful when you do not have control over the thread creation
process (i.e., when using a thread pool)
 Different from local variables
 Local variables visible only during single function
invocation
 TLS visible across function invocations
 Similar to static data
 TLS is unique to each thread, however (see the sketch below)
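
A minimal sketch using the C11 _Thread_local storage-class specifier
(GCC and Clang also accept __thread); the variable name
per_thread_counter is illustrative:

#include <stdio.h>
#include <pthread.h>

/* Each thread gets its own copy of this variable. */
static _Thread_local int per_thread_counter = 0;

static void *work(void *arg)
{
    for (int i = 0; i < 3; i++)
        per_thread_counter++;          /* touches only this thread's copy */
    printf("thread %ld: counter = %d\n", (long) arg, per_thread_counter);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, work, (void *) 1L);
    pthread_create(&t2, NULL, work, (void *) 2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("main: counter = %d\n", per_thread_counter);  /* still 0 here */
    return 0;
}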

Scheduler Activations
 Both M:M and Two-level models require
communication to maintain the appropriate
number of kernel threads allocated to the
application
 Typically use an intermediate data structure
between user and kernel threads –
lightweight process (LWP)
 Appears to be a virtual processor on
which the process can schedule a user thread
to run
 Each LWP attached to kernel thread
 How many LWPs to create?
 Scheduler activations provide upcalls - a
communication mechanism from the kernel
to the upcall handler in the thread library
 This communication allows an application
to maintain the correct number of kernel
threads
Operating System Examples

 Windows Threads
 Linux Threads

Windows Threads
 Windows implements the Windows API – primary
API for Win 98, Win NT, Win 2000, Win XP, Win 7,
Win 10
 A Windows application runs as a separate process,
and each process may contain one or more
threads.
 Implements the one-to-one mapping, kernel-level
 Each thread contains
 A thread id
 Register set representing state of processor
 Separate user and kernel stacks for when thread runs in user
mode or kernel mode
 Private data storage area used by run-time libraries and dynamic
link libraries (DLLs)
 The register set, stacks, and private storage area
are known as the context of the thread
Windows Threads (Cont.)
 The primary data structures of a thread
include:
 ETHREAD (executive thread block) –
includes pointer to process to which thread
belongs and to KTHREAD, in kernel space
 KTHREAD (kernel thread block) –
scheduling and synchronization info,
kernel-mode stack, pointer to TEB, in
kernel space
 TEB (thread environment block) – a data
structure in user space that includes the
thread id, user-mode stack, and thread-local
storage

Windows Threads Data Structures

Linux Threads
 Linux refers to them as tasks rather than threads
 Thread creation is done through clone() system call
 clone() allows a child task to share the address space of the
parent task (process)
 Flags control behavior, i.e., what the child task shares with its parent:
• CLONE_FS – file-system information is shared
• CLONE_VM – the same memory space is shared
• CLONE_SIGHAND – signal handlers are shared
• CLONE_FILES – the set of open files is shared
 struct task_struct points to the process data structures (shared or
unique); a sketch of a clone() call follows
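
A hedged sketch of creating a thread-like task with the glibc clone()
wrapper (assumes Linux and _GNU_SOURCE; the flag combination mirrors
the list above, and child_fn is an illustrative name):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define STACK_SIZE (1024 * 1024)

static int counter = 0;   /* visible to the child because CLONE_VM is set */

static int child_fn(void *arg)
{
    (void) arg;
    counter++;            /* runs in the same address space as the parent */
    printf("child task: counter = %d\n", counter);
    return 0;
}

int main(void)
{
    char *stack = malloc(STACK_SIZE);
    if (stack == NULL)
        exit(1);

    /* Share the address space, file-system info, open files, and signal
       handlers, i.e. behave much like a thread of the parent task. */
    int flags = CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD;
    pid_t pid = clone(child_fn, stack + STACK_SIZE, flags, NULL);
    if (pid == -1)
        exit(1);

    waitpid(pid, NULL, 0);   /* SIGCHLD lets the parent wait for the child */
    printf("parent task: counter = %d\n", counter);
    free(stack);
    return 0;
}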

Homework
 4.1, 4.2, 4.4, 4.5, 4.8, 4.11, 4.15, 4.17,
4.18, (4.21)

End of Chapter 4

