
Chapter 4: Threads & Concurrency



Chapter 4: Threads

q Overview

q Multicore Programming

q Multithreading Models

q Thread Libraries

q Implicit Threading

q Threading Issues

q Operating System Examples



Objectives

q Identify the basic components of a thread, and contrast threads and processes

q Describe the benefits and challenges of designing multithreaded applications

q Illustrate different approaches to implicit threading, including thread pools, fork-join, and Grand Central Dispatch

q Describe how the Windows and Linux operating systems represent threads

q Design multithreaded applications using the Pthreads, Java, and Windows threading APIs



Motivation

q Most modern applications are multithreaded

q Threads run within an application

q Multiple tasks within the application can be implemented by separate threads:
o Update display
o Fetch data
o Spell checking
o Answer a network request
q Process creation is heavy-weight while thread creation is light-weight

q Can simplify code, increase efficiency

q Kernels are generally multithreaded



Single and Multithreaded Processes



Multithreaded Server Architecture



Benefits

q Responsiveness – may allow continued execution if part of the process is blocked; especially important for user interfaces

q Resource Sharing – threads share the resources of the process, easier than shared memory or message passing (IPC)

q Economy – cheaper than process creation; thread switching has lower overhead than context switching

q Scalability – a process can take advantage of multicore architectures



Multicore Programming

q Multicore or multiprocessor systems put pressure on programmers; challenges include:
o Dividing activities
o Balance
o Data splitting
o Data dependency
o Testing and debugging
q Parallelism implies a system can perform more than one task simultaneously

q Concurrency supports more than one task making progress

o Single processor / core, scheduler providing concurrency



Concurrency vs. Parallelism

q Concurrent execution on single-core system:

q Parallelism on a multi-core system:



Multicore Programming

q Types of parallelism

o Data parallelism – distributes subsets of the same data across multiple cores, same operation on each

o Task parallelism – distributes threads across cores, each thread performing a unique operation



Data and Task Parallelism



Amdahl’s Law

q Identifies performance gains from adding additional cores to an application that has both serial and parallel components (see the formula below)

o S is the serial portion

o N processing cores

o That is, if an application is 75% parallel / 25% serial, moving from 1 to 2 cores results in a speedup of 1.6 times

o As N approaches infinity, speedup approaches 1/S

q The serial portion of an application has a disproportionate effect on the performance gained by adding additional cores

q But does the law take into account contemporary multicore systems?
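The law itself appears as a figure in the original slide; reconstructed here from the S and N definitions above:

  speedup <= 1 / (S + (1 - S) / N)

Checking the example: with S = 0.25 and N = 2, speedup <= 1 / (0.25 + 0.75/2) = 1 / 0.625 = 1.6.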



Amdahl’s Law



User Threads and Kernel Threads

q User threads - management done by user-level threads library

q Three primary thread libraries:

o POSIX Pthreads

o Windows threads

o Java threads

q Kernel threads - supported by the kernel

q Examples – virtually all general-purpose operating systems, including:


o Windows, Linux, Mac OS X
o iOS, Android



User and Kernel Threads



Multithreading Models

q Many-to-One

q One-to-One

q Many-to-Many



Many-to-One

q Many user-level threads mapped to single kernel thread

q One thread blocking causes all to block

q Multiple threads may not run in parallel on a multicore system because only one may be in the kernel at a time
q Few systems currently use this model

q Examples:

o Solaris Green Threads

o GNU Portable Threads



One-to-One

q Each user-level thread maps to one kernel thread

q Creating a user-level thread creates a kernel thread

q More concurrency than many-to-one

q Number of threads per process sometimes restricted due to overhead

q Examples

o Windows

o Linux



Many-to-Many Model

q Allows many user level threads to be mapped to many kernel threads

q Allows the operating system to create a sufficient number of kernel threads
q Windows with the ThreadFiber package

q Otherwise not very common



Two-level Model

q Similar to Many-to-Many, except that it allows a user thread to be bound to a kernel thread



Thread Libraries

q Thread library provides programmer with API for creating and managing threads

q Two primary ways of implementing

o Library entirely in user space

o Kernel-level library supported by the OS



Pthreads

q May be provided either as user-level or kernel-level

q A POSIX standard (IEEE 1003.1c) API for thread creation and synchronization

q Specification, not implementation

q API specifies behavior of the thread library; implementation is up to the developers of the library

q Common in UNIX operating systems (Linux & Mac OS X)



Pthreads Example



Pthreads Example (cont)
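The code figures for these two example slides are images in the original deck and are not reproduced in this text version. A minimal Pthreads sketch in the same spirit (the runner function and the shared sum variable are illustrative, not the book's exact listing):

  /* Pthreads sketch: create one worker thread that sums 1..N, then join it. */
  #include <pthread.h>
  #include <stdio.h>
  #include <stdlib.h>

  int sum;  /* shared by the threads of this process */

  void *runner(void *param) {
      int upper = atoi(param);     /* assumes an integer argument */
      sum = 0;
      for (int i = 1; i <= upper; i++)
          sum += i;
      pthread_exit(0);             /* terminate the worker thread */
  }

  int main(int argc, char *argv[]) {
      pthread_t tid;               /* thread identifier */
      pthread_attr_t attr;         /* thread attributes */

      pthread_attr_init(&attr);                      /* default attributes */
      pthread_create(&tid, &attr, runner, argv[1]);  /* start the worker */
      pthread_join(tid, NULL);                       /* wait for it to finish */
      printf("sum = %d\n", sum);
      return 0;
  }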



Pthreads Code for Joining 10 Threads
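The original code figure is not shown here; a minimal sketch of the idea (the array name is illustrative):

  /* Create 10 threads, then join them all with a simple loop. */
  #define NUM_THREADS 10

  pthread_t workers[NUM_THREADS];

  /* ... create the threads, e.g. pthread_create(&workers[i], NULL, runner, NULL); ... */

  for (int i = 0; i < NUM_THREADS; i++)
      pthread_join(workers[i], NULL);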

Windows Multithreaded C Program



Windows Multithreaded C Program (Cont.)
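The code for these two slides is also an image in the original deck. A minimal sketch of the same pattern, assuming the familiar summation example (identifiers are illustrative):

  /* Windows threads sketch: create one thread with CreateThread(), wait with WaitForSingleObject(). */
  #include <windows.h>
  #include <stdio.h>

  DWORD Sum;  /* shared by the threads of the process */

  /* the worker runs in this separate function */
  DWORD WINAPI Summation(LPVOID Param) {
      DWORD Upper = *(DWORD *)Param;
      for (DWORD i = 1; i <= Upper; i++)
          Sum += i;
      return 0;
  }

  int main(void) {
      DWORD ThreadId;
      DWORD Param = 10;
      HANDLE ThreadHandle = CreateThread(
          NULL,        /* default security attributes */
          0,           /* default stack size */
          Summation,   /* thread function */
          &Param,      /* parameter to thread function */
          0,           /* default creation flags */
          &ThreadId);  /* returns the thread identifier */

      if (ThreadHandle != NULL) {
          WaitForSingleObject(ThreadHandle, INFINITE);  /* wait for the thread to finish */
          CloseHandle(ThreadHandle);
          printf("sum = %lu\n", Sum);
      }
      return 0;
  }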



Java Threads

q Java threads are managed by the JVM

q Typically implemented using the threads model provided by underlying OS

q Java threads may be created by:

o Extending Thread class

o Implementing the Runnable interface

o Standard practice is to implement Runnable interface



Java Threads

q Implementing Runnable interface:

q Creating a thread:

q Waiting on a thread:



Java Executor Framework

q Rather than explicitly creating threads, Java also allows thread creation around the Executor interface:

q The Executor is used as follows:



Java Executor Framework



Java Executor Framework (cont.)



Implicit Threading

q Growing in popularity as the number of threads increases and program correctness becomes more difficult with explicit threads

q Creation and management of threads done by compilers and run-time libraries rather than programmers

q Five methods explored

o Thread Pools

o Fork-Join

o OpenMP

o Grand Central Dispatch

o Intel Threading Building Blocks



Thread Pools

q Create a number of threads in a pool where they await work

q Advantages:
o Usually slightly faster to service a request with an existing thread than
create a new thread
o Allows the number of threads in the application(s) to be bound to the size
of the pool
o Separating task to be performed from mechanics of creating task allows
different strategies for running task
4 i.e., Tasks could be scheduled to run periodically

q Windows API supports thread pools:
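The original slide shows a code figure here. One way to queue work to the Windows thread pool is the legacy QueueUserWorkItem() call; a rough sketch (the callback name is illustrative, and this may differ from the book's figure):

  /* Submit a function to the Windows thread pool rather than creating a thread explicitly. */
  #include <windows.h>
  #include <stdio.h>

  DWORD WINAPI PoolFunction(LPVOID Param) {
      /* this function runs on a pool thread */
      printf("Hello from the thread pool\n");
      return 0;
  }

  int main(void) {
      /* WT_EXECUTEDEFAULT queues the work item to a non-I/O worker thread */
      QueueUserWorkItem(PoolFunction, NULL, WT_EXECUTEDEFAULT);
      Sleep(1000);   /* crude wait so the process does not exit before the callback runs */
      return 0;
  }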



Java Thread Pools

q Three factory methods for creating thread pools in Executors class:



Java Thread Pools (cont)



Fork-Join Parallelism

q Multiple threads (tasks) are forked, and then joined.



Fork-Join Parallelism

q General algorithm for fork-join strategy:



Fork-Join Parallelism



Fork-Join Parallelism in Java



Fork-Join Parallelism in Java



Fork-Join Parallelism in Java

q The ForkJoinTask is an abstract base class

q RecursiveTask and RecursiveAction classes extend ForkJoinTask

q RecursiveTask returns a result (via the return value from the compute() method)
q RecursiveAction does not return a result



OpenMP

q Set of compiler directives and an API for C, C++, and FORTRAN

q Provides support for parallel programming in shared-memory environments

q Identifies parallel regions – blocks of code that can run in parallel

q #pragma omp parallel

q Create as many threads as there are cores



OpenMP Example

q Run the for loop in parallel
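The slide's code image is not reproduced; a minimal sketch of the idea (the arrays and size are illustrative):

  /* Parallelize a loop with OpenMP: iterations are divided among the threads. */
  #include <omp.h>
  #include <stdio.h>

  #define N 1000

  int main(void) {
      int a[N], b[N], c[N];

      for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2 * i; }

      /* run the for loop in parallel */
      #pragma omp parallel for
      for (int i = 0; i < N; i++)
          c[i] = a[i] + b[i];

      printf("c[N-1] = %d\n", c[N - 1]);
      return 0;
  }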



Grand Central Dispatch

q Apple technology for macOS and iOS operating systems

q Extensions to C, C++ and Objective-C languages, API, and run-time library

q Allows identification of parallel sections

q Manages most of the details of threading

q Block is in "^{ }":

^{ printf("I am a block"); }

q Blocks placed in dispatch queue

o Assigned to available thread in thread pool when removed from queue



Grand Central Dispatch

q Two types of dispatch queues:

o serial – blocks removed in FIFO order, queue is per process, called main
queue

4 Programmers can create additional serial queues within program

o concurrent – removed in FIFO order but several may be removed at a time

4 Four system-wide queues divided by quality of service:

– QOS_CLASS_USER_INTERACTIVE

– QOS_CLASS_USER_INITIATED

– QOS_CLASS_UTILITY

– QOS_CLASS_BACKGROUND
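A rough C sketch of submitting blocks to dispatch queues (the queue label and printed strings are illustrative; this is not the book's figure and assumes clang with blocks support on macOS):

  /* Grand Central Dispatch sketch: obtain queues and hand them blocks. */
  #include <dispatch/dispatch.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void) {
      /* system-wide concurrent queue, selected by quality-of-service class */
      dispatch_queue_t concurrent = dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0);

      /* programmer-created serial queue (NULL attribute means serial) */
      dispatch_queue_t serial = dispatch_queue_create("com.example.serial", NULL);

      dispatch_async(concurrent, ^{ printf("block on a concurrent queue\n"); });
      dispatch_async(serial, ^{ printf("block on a serial queue\n"); });

      sleep(1);   /* crude wait so the asynchronous blocks can run before exit */
      return 0;
  }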



Grand Central Dispatch

q For the Swift language a task is defined as a closure – similar to a block, minus the caret

q Closures are submitted to the queue using the dispatch_async() function:



Intel Threading Building Blocks (TBB)

q Template library for designing parallel C++ programs

q A serial version of a simple for loop

q The same for loop written using TBB with parallel_for statement:



Threading Issues

q Semantics of fork() and exec() system calls

q Signal handling

o Synchronous and asynchronous

q Thread cancellation of target thread

o Asynchronous or deferred

q Thread-local storage

q Scheduler Activations



Semantics of fork() and exec()

q Does fork() duplicate only the calling thread or all threads?

o Some UNIXes have two versions of fork

q exec() usually works as normal – replace the running process including all threads



Signal Handling

q Signals are used in UNIX systems to notify a process that a particular event has occurred.

q A signal handler is used to process signals


o Signal is generated by particular event
o Signal is delivered to a process
o Signal is handled by one of two signal handlers
4 default
4 user-defined

q Every signal has a default handler that kernel runs when handling
signal
o User-defined signal handler can override default

o For single-threaded, signal delivered to process



Signal Handling (Cont.)

q Where should a signal be delivered for a multi-threaded process?

o Deliver the signal to the thread to which the signal applies

o Deliver the signal to every thread in the process

o Deliver the signal to certain threads in the process

o Assign a specific thread to receive all signals for the process
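As a concrete illustration of the first two options, a rough POSIX sketch (the signal choice is illustrative): kill() delivers a signal to the process as a whole, while pthread_kill() directs it to a specific thread.

  /* Deliver a signal to the whole process vs. to one particular thread. */
  #include <signal.h>
  #include <pthread.h>
  #include <unistd.h>

  void handler(int sig) {
      (void)sig;
      write(STDOUT_FILENO, "signal handled\n", 15);  /* async-signal-safe output */
  }

  int main(void) {
      struct sigaction sa = {0};
      sa.sa_handler = handler;
      sigaction(SIGUSR1, &sa, NULL);          /* install a user-defined handler */

      kill(getpid(), SIGUSR1);                /* delivered to the process; some thread handles it */
      pthread_kill(pthread_self(), SIGUSR1);  /* delivered to this specific thread */

      return 0;
  }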



Thread Cancellation

q Terminating a thread before it has finished

q Thread to be canceled is target thread

q Two general approaches:

o Asynchronous cancellation terminates the target thread immediately

o Deferred cancellation allows the target thread to periodically check if it should be cancelled

q Pthread code to create and cancel a thread (a sketch follows below):
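The original code figure is not reproduced; a minimal sketch of the pattern (the worker body is illustrative):

  /* Create a thread, request its cancellation, then wait for it to terminate. */
  #include <pthread.h>
  #include <stdio.h>

  void *worker(void *param) {
      while (1) {
          /* ... do some work ... */
          pthread_testcancel();   /* deferred cancellation point */
      }
      return NULL;
  }

  int main(void) {
      pthread_t tid;

      pthread_create(&tid, NULL, worker, NULL);   /* create the target thread */
      pthread_cancel(tid);                        /* request that it be cancelled */
      pthread_join(tid, NULL);                    /* wait for the cancellation to complete */

      printf("worker cancelled\n");
      return 0;
  }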
Thread Cancellation (Cont.)

q Invoking thread cancellation requests cancellation, but actual cancellation depends on thread state

q If thread has cancellation disabled, cancellation remains pending until thread enables it

q Default type is deferred


o Cancellation only occurs when thread reaches cancellation point
4 i.e., pthread_testcancel()
4 Then cleanup handler is invoked

q On Linux systems, thread cancellation is handled through signals



Thread Cancellation in Java

q Deferred cancellation uses the interrupt() method, which sets the interrupted status of a thread.

q A thread can then check to see if it has been interrupted:



Thread-Local Storage

q Thread-local storage (TLS) allows each thread to have its own copy
of data

q Useful when you do not have control over the thread creation process
(i.e., when using a thread pool)

q Different from local variables

o Local variables visible only during single function invocation

o TLS visible across function invocations

q Similar to static data

o TLS is unique to each thread
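A minimal C sketch of the idea, using the C11 _Thread_local storage-class specifier (the counter and worker are illustrative):

  /* Each thread gets its own copy of a _Thread_local variable. */
  #include <pthread.h>
  #include <stdio.h>

  _Thread_local int tls_counter = 0;   /* one instance per thread, visible across functions */

  void bump(void) { tls_counter++; }   /* unlike a local variable, visible beyond one call */

  void *worker(void *param) {
      bump(); bump();
      printf("thread %ld sees tls_counter = %d\n", (long)param, tls_counter);  /* prints 2 */
      return NULL;
  }

  int main(void) {
      pthread_t t1, t2;
      pthread_create(&t1, NULL, worker, (void *)1L);
      pthread_create(&t2, NULL, worker, (void *)2L);
      pthread_join(t1, NULL);
      pthread_join(t2, NULL);
      return 0;
  }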



Scheduler Activations

q Both M:M and Two-level models require communication to maintain the appropriate number of kernel threads allocated to the application

q Typically use an intermediate data structure between user and kernel threads – lightweight process (LWP)
o Appears to be a virtual processor on which process can schedule user
thread to run

o Each LWP attached to kernel thread

o How many LWPs to create?


q Scheduler activations provide upcalls - a
communication mechanism from the kernel
to the upcall handler in the thread library

q This communication allows an application to maintain the correct number of kernel threads
Operating System Examples

q Windows Threads

q Linux Threads



Windows Threads

q Windows API – primary API for Windows applications

q Implements the one-to-one mapping, kernel-level

q Each thread contains

o A thread id

o Register set representing state of processor

o Separate user and kernel stacks for when thread runs in user mode or
kernel mode

o Private data storage area used by run-time libraries and dynamic link
libraries (DLLs)

q The register set, stacks, and private storage area are known as the
context of the thread



Windows Threads (Cont.)

q The primary data structures of a thread include:

o ETHREAD (executive thread block) – includes pointer to process to which thread belongs and to KTHREAD, in kernel space

o KTHREAD (kernel thread block) – scheduling and synchronization info, kernel-mode stack, pointer to TEB, in kernel space

o TEB (thread environment block) – thread id, user-mode stack, thread-local storage, in user space



Windows Threads Data Structures



Linux Threads

q Linux refers to them as tasks rather than threads

q Thread creation is done through clone() system call

q clone() allows a child task to share the address space of the parent task (process)

o Flags control behavior

q struct task_struct points to process data structures (shared or unique)
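A rough sketch of thread-like creation with clone() (the flag set and stack size are illustrative; real programs normally use Pthreads, which calls clone() internally):

  /* Create a child task that shares the parent's address space, like a thread. */
  #define _GNU_SOURCE
  #include <sched.h>
  #include <signal.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/wait.h>

  static int child_fn(void *arg) {
      printf("child task running in the shared address space\n");
      return 0;
  }

  int main(void) {
      const int STACK_SIZE = 64 * 1024;
      char *stack = malloc(STACK_SIZE);

      /* share memory, filesystem info, open files, and signal handlers with the parent */
      int flags = CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND;

      /* the stack grows downward on most architectures, so pass its top */
      pid_t pid = clone(child_fn, stack + STACK_SIZE, flags | SIGCHLD, NULL);

      waitpid(pid, NULL, 0);   /* wait for the child task */
      free(stack);
      return 0;
  }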



End of Chapter 4
