
Chapter 4: Threads

Operating System Concepts – 9th Edition Silberschatz, Galvin and Gagne ©2013
Processes and Threads

 Process:
 An executing instance of a program is called a process.
 A set of threads
 A collection of resources
 Thread:
 A thread is a sequence of executed instructions within a process.
 Each thread is a portion of a program that can execute concurrently with other threads (multithreading).
 Threads share the resources allotted to the process by the kernel.

Processes and Threads

 Process:
 Each process occupies its own, separate portion of physical memory.

 Thread:
 A thread is a subdivision of a process that shares the memory space of its parent process.

 A process/thread is active, while a program is not.

Chapter 4: Threads
 Overview
 Multicore Programming
 Multithreading Models
 Thread Implementation
 Thread Libraries
 Thread States
 Operating System Examples
 Windows XP Threads
 Linux Threads
 Multiprocessing

Objectives
 To introduce the notion of a thread — a fundamental unit of CPU
utilization that forms the basis of multithreaded computer systems

 To discuss the APIs for the Pthreads, Win32, and Java thread
libraries

 To examine issues related to multithreaded programming

Motivation
 Most modern applications are multithreaded
 Threads run within application
 Multiple tasks within the application can be implemented by separate threads
 Update display
 Fetch data
 Spell checking
 Answer a network request
 Process creation is heavy-weight while thread creation is light-weight
 Can simplify code, increase efficiency
 Kernels are generally multithreaded

Multicore Programming

 Multicore or multiprocessor systems put pressure on programmers; challenges include:
 Dividing activities (Identifying tasks)
 Balance (tasks perform equal work of equal value)
 Data splitting (data accessed and manipulated by the tasks must
be divided to run on separate cores)
 Data dependency (between two or more tasks)
 Testing and debugging
 Parallelism implies a system can perform more than one task
simultaneously
 Concurrency supports more than one task making progress
 Single processor / core, scheduler providing concurrency

Multicore Programming (Cont.)

 Types of parallelism
 Data parallelism – distributes subsets of the same data across multiple cores, same operation on each (see the sketch after this list)

 Task parallelism – distributing threads across cores, each thread performing a unique operation
 As # of threads grows, so does architectural support for threading
 CPUs have cores as well as hardware threads
 Consider Oracle SPARC T4 with 8 cores, and 8 hardware
threads per core

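As a rough Java illustration of data parallelism (a sketch added here, not part of the original slides; class and variable names are made up), two threads apply the same operation (summing) to different halves of one array:

    // Data parallelism sketch: same operation, different subsets of the data.
    public class ParallelSum {
        public static void main(String[] args) throws InterruptedException {
            int[] data = new int[1_000_000];
            for (int i = 0; i < data.length; i++) data[i] = 1;

            long[] partial = new long[2];      // one result slot per worker thread
            int mid = data.length / 2;

            Thread t0 = new Thread(() -> { for (int i = 0;   i < mid;         i++) partial[0] += data[i]; });
            Thread t1 = new Thread(() -> { for (int i = mid; i < data.length; i++) partial[1] += data[i]; });

            t0.start(); t1.start();            // both halves are processed concurrently
            t0.join();  t1.join();             // wait for both workers to finish
            System.out.println("sum = " + (partial[0] + partial[1]));
        }
    }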
Concurrency vs. Parallelism

 Concurrent execution on single-core system:

 Parallelism on a multi-core system:

Single and Multithreaded Processes

Threads
 Each thread contains:
 a Program Counter (PC), a register holding the address of the next instruction
 a stack for temporary data (e.g., return addresses, parameters)
 a data area for data declared globally and statically
 Remaining resources are shared amongst threads in a process
 A process/thread is active, while a program is not.
 Thread operations in user space:
 create, destroy, synch, context switch
 Communication between kernel and threads library
 shared data structures.
 interrupts as threads (user upcalls or signals), e.g., for scheduling decisions and preemption warnings.
 Kernel scheduler interface - allows dissimilar thread packages to coordinate.

Benefits of Threads

 Takes less time to create a new thread than a process


 Thread creation can be 10-100 times faster than process creation
 Less time to terminate a thread than a process
 Less time to switch between two threads within the same process than to switch
between processes
 Lower context-switching overhead
 Threads can communicate via shared memory
 All threads within a process share same global memory
 processes have to rely on kernel services for IPC
 Standard thread libraries: POSIX threads, Win32 threads, Java threads
 A POSIX standard (IEEE 1003.1c) API for thread creation and synchronization
 supported by Linux and is portable to most UNIX platforms
 Has been ported to Windows
 Java threads are managed by the JVM

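A minimal Java sketch (added for illustration, not from the slides; names are made up) of threads communicating through shared memory: both threads update the same counter, which separate processes could only do through kernel-provided IPC.

    // Two threads in one process share the same global data.
    public class SharedCounter {
        private static int counter = 0;                  // shared variable
        private static final Object lock = new Object();

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 100_000; i++) {
                    synchronized (lock) { counter++; }   // synchronized update avoids lost increments
                }
            };
            Thread a = new Thread(work);
            Thread b = new Thread(work);
            a.start(); b.start();
            a.join();  b.join();
            System.out.println("counter = " + counter);  // prints 200000
        }
    }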
Disadvantages of Threads
 Several actions affect all of the threads in a process
 The OS must manage these at the process level.
 Many library functions are not thread-safe.

 Examples:
 Suspending a process involves suspending all threads of the process
 Termination of a process, terminates all threads within the process

User Threads and Kernel Threads

 User threads - management done by user-level threads library

 Three primary thread libraries:
 POSIX Pthreads
 Windows threads
 Java threads
 Kernel threads - Supported by the Kernel
 Examples – virtually all general purpose operating systems, including:
 Windows
 Solaris
 Linux
 Tru64 UNIX
 Mac OS X

Multithreading

 The ability of an OS to support multiple, concurrent paths of execution within a single process.
 The Java run-time environment is a single process with multiple threads.

 Multiple processes and threads are found in Windows, Solaris, and many modern versions of UNIX.

Multithreading on a Uni-processor

 On a uniprocessor, multiprogramming enables the interleaving of multiple threads within multiple processes.
 In the example,
 Three threads in two processes are interleaved on the processor.
 Execution passes from one thread to another either when the currently running thread
is blocked or its time slice is exhausted.

Thread Implementation
 Categories
 User Level Thread (ULT)
 Kernel level Thread (KLT)
 kernel-supported threads
 lightweight processes (LWP)

 Models
 Many-to-One
 One-to-One
 Many-to-Many
 Two-level Model

Thread Implementation

(Figure: two processes whose threads are managed by thread libraries in user space, with the kernel below and processors P at the hardware level.)

Dining philosophers

 Problem statement (resource handling); a code sketch follows this list:

 Five silent philosophers sit at a table around a bowl of spaghetti.
 A fork is placed between each pair of adjacent philosophers.
 A philosopher can eat spaghetti only when he has both the left and right forks.
 A philosopher can use a fork only if it is not being used by another philosopher.
 After he finishes eating, he puts down both forks so they become available to others.
 If a philosopher picks up one fork but the other is not available, he keeps holding the first fork until the other becomes available.
 Eating is not limited by the amount of spaghetti left: assume an infinite supply.

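A short Java sketch of the protocol described above (added for illustration; not from the slides). Each fork is a lock and each philosopher is a thread that holds its first fork while waiting for the second; as the slide notes, with all five philosophers doing this the program can deadlock.

    import java.util.concurrent.locks.ReentrantLock;

    public class DiningPhilosophers {
        public static void main(String[] args) {
            int n = 5;
            ReentrantLock[] forks = new ReentrantLock[n];
            for (int i = 0; i < n; i++) forks[i] = new ReentrantLock();

            for (int i = 0; i < n; i++) {
                final int left = i, right = (i + 1) % n;
                new Thread(() -> {
                    for (int round = 0; round < 3; round++) {
                        forks[left].lock();            // pick up the left fork
                        forks[right].lock();           // hold it while waiting for the right fork
                        try {
                            System.out.println("Philosopher " + left + " is eating");
                        } finally {
                            forks[right].unlock();     // put both forks down
                            forks[left].unlock();
                        }
                        Thread.yield();                // think for a moment
                    }
                }, "philosopher-" + left).start();
            }
        }
    }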
User-Level Threads (ULT) (ex. Java)

 Many user-level threads mapped to a single kernel thread
 Kernel not aware of the existence of
threads
 Thread management handled by thread
library in user space
 No mode switch (kernel not involved)
 But I/O in one thread could block the
entire process!
 Multiple threads may not run in parallel on a multicore system because only one may be in the kernel at a time
 Few systems currently use this model
 Examples:
 Solaris Green Threads
 GNU Portable Threads

“Many-to-One” model

Threads library
 Contains code for:
 creating and destroying threads
 passing messages and data between
threads
 scheduling thread execution
 passing control from one thread to another
 saving and restoring thread contexts

 ULTs can be implemented on any operating system, because no kernel services are required to support them

Advantages and disadvantages of ULT

Advantages:
 Thread switching does not involve the kernel: no mode switching, therefore fast
 Scheduling can be application specific: choose the best algorithm for the situation
 Can run on any OS; we only need a thread library

Disadvantages:
 Most system calls are blocking for processes, so all threads within a process will be implicitly blocked
 The kernel can only assign processors to processes; two threads within the same process cannot run simultaneously on two processors

Kernel-Level Threads (KLT)
Ex: Windows NT, Windows 2000, OS/2

 Each user-level thread maps to a kernel thread
 Creating a user-level thread creates a
kernel thread
 All thread management is done by kernel
 No thread library
 Instead an API to the kernel thread
facility
 Kernel maintains context information for
the process and the threads
 Switching between threads requires the
kernel
 The kernel does scheduling on a per-thread basis

“One-to-One” model

Advantages and disadvantages of KLT

Advantages:
 The kernel can schedule multiple threads of the same process on multiple processors
 More concurrency than the many-to-one model
 Blocking is at the thread level, not the process level: if a thread blocks, the CPU can be assigned to another thread in the same process
 Even the kernel routines can be multithreaded

Disadvantages:
 Thread switching always involves the kernel: two mode switches per thread switch
 Therefore slower compared to user-level threads (but still faster than a full process switch)

Combined ULT/KLT Approaches
(e.g. Solaris)

 Allows many ULTs to be mapped to many KLTs
 Allows the operating system to create a sufficient number of kernel threads
 Thread creation done in the user space
 Bulk of thread scheduling and synchronization done in user space
 KLTs are then assigned to processors
 Combines the best of both approaches

“Many-to-Many” model

e.g. Solaris

(Figure: Solaris-style combined model: thread libraries in user space map user-level threads onto lightweight processes (L) within each process, and the kernel schedules these onto processors P at the hardware level.)

Two-level Model

 Similar to M:M, except that it allows a user thread to be bound to a kernel thread
 Examples
 IRIX
 HP-UX
 Tru64 UNIX
 Solaris 8 and earlier

Implementing Threads in User Space

Possible scheduling of user-level threads


• 50-msec process quantum
• threads run 5 msec/CPU burst

Implementing Threads in the Kernel

Possible scheduling of kernel-level threads


• 50-msec process quantum
• threads run 5 msec/CPU burst

Multithreaded Server Architecture

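This slide's figure (not reproduced here) depicts a server that creates a new thread to service each incoming client request while the original thread resumes listening for additional requests. A rough Java sketch of that thread-per-request pattern (added for illustration; the port number and echo protocol are made up):

    import java.io.*;
    import java.net.*;

    public class MultithreadedServer {
        public static void main(String[] args) throws IOException {
            try (ServerSocket server = new ServerSocket(5000)) {
                while (true) {
                    Socket client = server.accept();           // listen for a client request
                    new Thread(() -> handle(client)).start();  // spawn a worker, resume listening
                }
            }
        }

        private static void handle(Socket client) {
            try (Socket c = client;
                 BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
                 PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
                out.println("echo: " + in.readLine());         // service the request
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }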
Java Threads

 Java threads are managed by the JVM


 Typically implemented using the threads model provided by underlying
OS
 Java threads may be created by:

 Extending Thread class


 Implementing the Runnable interface

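Both creation styles from the slide above, in one short runnable sketch (class names are made up for the example):

    public class ThreadDemo {
        // 1) Extending the Thread class
        static class Worker extends Thread {
            @Override public void run() {
                System.out.println("extended Thread running in " + getName());
            }
        }

        public static void main(String[] args) throws InterruptedException {
            Thread t1 = new Worker();

            // 2) Implementing the Runnable interface (here with a lambda)
            Thread t2 = new Thread(() ->
                System.out.println("Runnable running in " + Thread.currentThread().getName()));

            t1.start();
            t2.start();
            t1.join();
            t2.join();
        }
    }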
Pthreads

 May be provided either as user-level or kernel-level


 A POSIX standard (IEEE 1003.1c) API for thread creation and
synchronization
 Specification, not implementation
 API specifies behavior of the thread library; implementation is up to the developers of the library
 Common in UNIX operating systems (Solaris, Linux, Mac OS X)

Thread States

(Thread state diagram: a thread is Born; start() moves it to Ready; the scheduler dispatches it to Running, and a timeout returns it to Ready. From Running it may move to a Blocked/Sleeping/Waiting state by blocking on I/O, sleep(), suspend(), or wait(), and back to Ready when I/O becomes available, it wakes up, resume() is called, or notify() is issued. When run() completes, or the thread is stopped or terminated, it enters the Dead state.)

Thread States
 Three key states: Running, Ready, Blocked

 No Suspend state because all threads within the same process share the same
address space (same process image)
 Suspending implies swapping out the whole process, suspending all threads in
the process

 Termination of a process terminates all threads within the process


 Because the process is the environment the thread runs in


Thread States
 Born state
 Thread just created
 When start() called, enters ready state
 Ready state
 Highest-priority ready thread enters the running state for execution
 Running state
 System assigns a processor to the thread (thread begins executing)
 When run() completes or the thread terminates, it enters the dead state
 Dead state
 Thread marked to be removed by system
 Entered when run terminates or throws uncaught exception


Other Thread States
 Blocked state
 Entered from running state
 Blocked thread cannot use processor, even if available
 Common reason for blocked state - waiting on I/O request
 Sleeping state
 Entered when sleep method called
 Cannot use processor
 Enters ready state after sleep time expires
 Waiting state
 Entered when wait() is called on an object the thread is accessing
 One waiting thread becomes ready when notify() is called on that object
 notifyAll() - all waiting threads become ready

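A small Java sketch of the waiting state described above (added for illustration; names are made up): a consumer thread calls wait() on a shared object and becomes ready again only after another thread calls notify().

    public class WaitNotifyDemo {
        private static final Object mailbox = new Object();
        private static String message;                        // shared data guarded by mailbox

        public static void main(String[] args) throws InterruptedException {
            Thread consumer = new Thread(() -> {
                synchronized (mailbox) {
                    while (message == null) {
                        try {
                            mailbox.wait();                   // waiting state: releases the lock
                        } catch (InterruptedException e) {
                            return;
                        }
                    }
                    System.out.println("received: " + message);
                }
            });
            consumer.start();

            Thread.sleep(100);                                // main thread in sleeping state
            synchronized (mailbox) {
                message = "hello";
                mailbox.notify();                             // waiting thread becomes ready
            }
            consumer.join();
        }
    }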

Operating System Examples

 Different operating systems take different approaches to processes and threads:


 How processes are named
 Whether threads are provided
 How processes are represented
 How process resources are protected
 What mechanisms are used for inter-process communication and
synchronization
 How processes are related to each other

 Example:
 Windows XP Threads
 Linux Threads

Windows Thread States

Linux Thread States

Parallel Processor (Multiprocessing)

 Traditionally, the computer has been viewed as a sequential machine.
 A processor executes instructions one at a time in
sequence
 Each instruction is a sequence of operations
 Multiprocessing is the use of two or more
processors within a single computer system.
 The key design issues include
 Parallel execution of processes or threads
 Scheduling
 Synchronization
 Memory Management
 Reliability and Fault Tolerance

Parallel Processor Architectures

Tightly vs Loosely Coupled
 Tightly Coupled: Multiprocessors
 Delay in intercomputer communication is short,
 High data rate,
 Multiple CPUs present on the same computer circuit board,
 Communicate via wires etched on the boards.
 Tend to be parallel systems working on a single problem
 Loosely Coupled: Multicomputer
 Delay in intercomputer communication is high,
 Low data rate.
 Computers may connect via, e.g., a 2400 bps modem over the telephone network.
 Tend to be distributed systems working on many unrelated problems

Distributed H/W - Bus-Based Multiprocessors
 Multiple CPUs connected to a common bus, with a memory module.
 High-speed backplane onto which CPU and memory cards can be inserted.
 Bus with 32/64 address/data/control lines.
 Memory read: the CPU puts the address of the word on the address lines with a signal on the control lines; memory responds by putting the word on the data lines.
 Coherent memory (CPU A writes, B reads what A wrote)
 Caches are used to avoid a bottleneck at the bus; a snoopy cache is used as it constantly eavesdrops on/monitors the bus for changes
 This architecture supports up to 64 CPUs

Distributed H/W - Switch-Based Multiprocessors
 To build architectures with more than 64 CPUs, use a crossbar switch.
 Each memory and CPU intersect at a point known as a crosspoint switch, which may be opened or closed (n CPUs and n memories require n^2 crosspoint switches!).
 Many CPUs can access memory at the same time, BUT if two CPUs try to access the same memory simultaneously, one has to wait.

 Non-Uniform Memory Access (NUMA) machines: a CPU accesses its local memory quickly but other CPUs' memory slowly.
 Placement of programs/data becomes critical to make most accesses local. Complex algorithms are needed for good data placement.

Distributed H/W - Bus-Based Multicomputers
 Bus-based multiprocessors, even with snoopy caches, are limited by bus capacity to about 64 CPUs.
 Going beyond 64 CPUs requires an expensive crossbar switch, and NUMA machines require strong software placement algorithms.
 Building a large, tightly coupled shared memory processor is possible but
difficult and expensive.

 Bus-based multicomputers:

 A multicomputer has no shared memory; each CPU has a direct connection to its local memory.
 Inter-CPU communication is via a LAN (CPU-to-CPU traffic volume is very low compared to CPU-to-memory traffic).
 A low-speed LAN is used instead of a high-speed backplane bus.

Distributed H/W - Switched Multicomputers
 Each CPU has direct and exclusive/dedicated access to its own private memory.
 A popular topology is the hypercube (n-dimensional cube). The figure shows a 4-dimensional cube: two cubes, each with 8 vertices and 12 edges, where each vertex is a CPU with private memory.
 Corresponding vertices in the two cubes are connected. In an n-dimensional hypercube, each CPU has n connections to other CPUs (here, 4 connections per CPU).
 Complexity of wiring increases logarithmically with size, as only nearest neighbors are connected.
 Many messages make several hops
to reach destination.
 Longest path grows logarithmically
with size.
 Hypercubes with 1024 CPUs
to 16,384 CPUs are available

Symmetric Multiprocessing
 Multiple CPUs, residing in one cabinet,
share the same memory
 Kernel can execute on any processor
 Allowing portions of the kernel to
execute in parallel
 Typically each processor does self-scheduling from the pool of available processes or threads
 SMP systems range from two to as many
as 32 or more processors
 Problem:
 If one CPU fails, the entire SMP
system is down.

Cluster (Grid Computing)
 A set of loosely connected computers that work together
 Usually connected to each other through fast local area networks
 Each node running its own instance of an operating system.
 Cluster-based Multithreading
 CMT is just an ‘invention’ by AMD (in 2012)
 Increasing advancements in processor and networking technology are leading towards CMT.

Web & App Server Cluster Models

End of Chapter 4

